Binary Classification using Tensorflow and Keras¶

Aladdin Alakhras, University of Missouri-St. Louis

Problem:¶

  • Abalones are endangered marine snails found in cold coastal waters worldwide, Their price is positively correlated with age. However, determining abalone age is complex. A machine learning model to classify abalone age would significantly accelerate this manual process, benefiting abalone researchers and adding value to the field.

  • Dataset: Abalone.csv

    • This project classifies abalone snails as "young" or "old" based on their ring count, using input features such as gender, height, weight, etc.
  • Original data at UCI ML database

Phase 1: Exploratory Data Analysis & preparation of the Abalone Data Set¶

We are going to use physical and biological attributes of abalone to predict whether an abalone is old or young. There are 4177 observations in this data set and 8 features. We have sex as a categorical feature indicating whether an abalone is male, female or infant. We have other 7 numeric features which describe the size and weight of abalones. The missing values have been removed in the original research so we do not have missing values here.

Step 1: Load the Dataset, Clean It, and Preview Its Shape and the First 5 Rows#¶

In [96]:
import os
import shap
import random
import numpy as np
import pandas as pd
import altair as alt
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow import keras
from keras.callbacks import EarlyStopping, ModelCheckpoint
import matplotlib.pyplot as plt
from prettytable import PrettyTable
from IPython.display import display, Markdown
from sklearn.metrics import precision_score, recall_score, f1_score
from tensorflow.keras import layers, models # Import the models module
import warnings
warnings.filterwarnings('ignore')
In [97]:
BASE_PATH = 'https://raw.githubusercontent.com/Alakhras/Abalone-Age/main/Abalone.csv'
dataset1 = pd.read_csv(BASE_PATH)
print(dataset1.shape)
dataset1.head()
(4177, 9)
Out[97]:
Sex Length Diameter Height Whole weight Shucked weight Viscera weight Shell weight Rings
0 M 0.455 0.365 0.095 0.5140 0.2245 0.1010 0.150 15
1 M 0.350 0.265 0.090 0.2255 0.0995 0.0485 0.070 7
2 F 0.530 0.420 0.135 0.6770 0.2565 0.1415 0.210 9
3 M 0.440 0.365 0.125 0.5160 0.2155 0.1140 0.155 10
4 I 0.330 0.255 0.080 0.2050 0.0895 0.0395 0.055 7

The minimum value in height is zero. There are only two rows with zero values. We can remove these two rows from our dataset and consider the rest of the data for the model.

In [99]:
dataset1 = dataset1[dataset1.Height != 0]
print(dataset1.shape)
(4175, 9)
  • Step 2: Prepare the output
  • Step 3: Shuffle the rows
  • Step 4: Split into Training/Validation Set

We categorize abalones based on their age, designating those with more than 10 rings as "old" and those with 10 rings or fewer as "young." This threshold is arbitrary, and we observe an imbalanced distribution of the target variable, which could affect our predictive modeling efforts. Addressing this imbalance is crucial for improving model performance and ensuring reliable classifications.

In [100]:
dataset1["Is old"] = np.where(dataset1["Rings"] > 9, "Old", "Young")
dataset1.head()
Out[100]:
Sex Length Diameter Height Whole weight Shucked weight Viscera weight Shell weight Rings Is old
0 M 0.455 0.365 0.095 0.5140 0.2245 0.1010 0.150 15 Old
1 M 0.350 0.265 0.090 0.2255 0.0995 0.0485 0.070 7 Young
2 F 0.530 0.420 0.135 0.6770 0.2565 0.1415 0.210 9 Young
3 M 0.440 0.365 0.125 0.5160 0.2155 0.1140 0.155 10 Old
4 I 0.330 0.255 0.080 0.2050 0.0895 0.0395 0.055 7 Young

Then check if the data is imbalanced by calculating what percentage of the output labels are 0 and what percentage are 1 which is:

Target Old Young

| Observations | 2081 | 2094 |. | Percentage | 49.8% | 50.2% |.

The dataset exhibits some imbalance but is sufficiently balanced for our analysis, allowing us to proceed with our modeling efforts.

Shuffling the dataset for reliability¶

Convert 'Sex' and 'Is old' columns to numeric representation¶

In [101]:
# Shuffle the dataset
dataset1 = dataset1.sample(frac=1).reset_index(drop=True)
np.set_printoptions(formatter={'float': lambda x: "{0:0.2f}".format(x)})
# Convert 'Sex' column to numeric representation
dataset1['Sex'] = dataset1['Sex'].map({'M': 1, 'F': 0, 'I': 2}).astype(float)  # Apply astype(float) to the Series after mapping
#Convert 'Is old' column to numeric representation
dataset1['Is old'] = dataset1['Is old'].map({'Old': 1, 'Young': 0}).astype(float)  # Apply astype(float) to the Series after mapping
#dataset = dataset2.to_numpy()
np.set_printoptions(formatter={'float': lambda x: "{0:0.2f}".format(x)})
print(dataset1.iloc[0:5, :].to_string()) # Use to_string() for a single table output
   Sex  Length  Diameter  Height  Whole weight  Shucked weight  Viscera weight  Shell weight  Rings  Is old
0  1.0   0.530     0.410   0.140        0.7545          0.3495          0.1715        0.2105      8     0.0
1  1.0   0.550     0.385   0.130        0.7275          0.3430          0.1625        0.1900      8     0.0
2  2.0   0.365     0.270   0.105        0.2155          0.0915          0.0475        0.0630      6     0.0
3  1.0   0.520     0.395   0.125        0.8115          0.4035          0.1660        0.2000      7     0.0
4  2.0   0.510     0.390   0.125        0.5970          0.2930          0.1265        0.1555      8     0.0

Split into Training/Validation Set¶

In [102]:
index_20percent = int(0.2 * len(dataset1))  # Get 20% of the total number of rows
print(index_20percent)
# Create validation dataset (features and target together)
VALIDATION_DATA = dataset1.iloc[:index_20percent]
# Create training dataset (features and target together)
TRAINING_DATA = dataset1.iloc[index_20percent:]

XVALIDATION = dataset1.iloc[:index_20percent, :-1].values
YVALIDATION = dataset1.iloc[:index_20percent, -1].values

XTRAIN = dataset1.iloc[index_20percent:, :-1].values
YTRAIN = dataset1.iloc[index_20percent:, -1].values
plt.hist(XTRAIN[:, 8])
plt.ylabel('8th Column (fixed Rings)')
plt.show()
XTRAIN_df = pd.DataFrame(XTRAIN)
XTRAIN_df.head()
835
No description has been provided for this image
Out[102]:
0 1 2 3 4 5 6 7 8
0 1.0 0.500 0.420 0.125 0.6200 0.2550 0.1500 0.2050 11.0
1 2.0 0.325 0.225 0.075 0.1390 0.0565 0.0320 0.0900 6.0
2 2.0 0.635 0.500 0.165 1.4890 0.7150 0.3445 0.3615 13.0
3 1.0 0.440 0.350 0.110 0.4585 0.2000 0.0885 0.1300 9.0
4 1.0 0.510 0.395 0.145 0.6185 0.2160 0.1385 0.2400 12.0

Exploratory data analysis on training & validation sets¶

3.1 Target variable distribution¶

Exploratory data analysis gives us a basic understanding of our train and validation data, and we may also find the some useful features to make young-old prediction for abalones. First, we plot the distribution of our target variable. The young-old class is derived from rings, so we also include the distribution of rings into our plot. From the figure we can observe a slightly right skewed distribution of rings. Since we set the threshold for old abalone as rings > 10 , we get an unbalanced distribution of old and young abalone.

In [103]:
# Distribution of our target: rings and is_old
alt.Chart(TRAINING_DATA, title="Distribution of target variables").mark_bar().encode(
    alt.X(alt.repeat(), type="nominal"), alt.Y("count()")
).repeat(["Rings", "Is old"])
Out[103]:

3.2 Distribution of categorical variable¶

Sex is the only categorical variable in this data set. It has three categories: male, female and infant, denoted as M, F, I respectively. The distribution of sex is balanced.

In [104]:
# plot sex distribution
def plot_pie_chart():
    # Access the 'Sex' column instead of 'sex' (assuming the column name is 'Sex')
    values = TRAINING_DATA['Sex'].value_counts()
    labels = ['Male', 'Female', 'Infant']
    plt.pie(values, labels=labels, autopct=lambda p: f'{p: .2f}%')
    plt.title('Sex Distribution', size=14)
    plt.show()
plot_pie_chart()
No description has been provided for this image

3.3 Distribution of continuous variables¶

We first get a summary table from our data set. However, this table does not contain intuitive information to help us make prediction.

In [105]:
# Now you can use describe()
TRAINING_DATA.describe()
Out[105]:
Sex Length Diameter Height Whole weight Shucked weight Viscera weight Shell weight Rings Is old
count 3340.000000 3340.000000 3340.000000 3340.000000 3340.000000 3340.000000 3340.000000 3340.000000 3340.000000 3340.000000
mean 1.006886 0.524132 0.407898 0.139735 0.833519 0.361556 0.181734 0.239918 9.909581 0.501497
std 0.796977 0.121868 0.100690 0.042905 0.497206 0.224503 0.111013 0.141483 3.219842 0.500073
min 0.000000 0.075000 0.055000 0.010000 0.002000 0.001000 0.000500 0.001500 1.000000 0.000000
25% 0.000000 0.450000 0.345000 0.115000 0.438500 0.181875 0.091875 0.129000 8.000000 0.000000
50% 1.000000 0.545000 0.425000 0.140000 0.804250 0.339750 0.171500 0.235000 10.000000 1.000000
75% 2.000000 0.620000 0.485000 0.165000 1.166625 0.508625 0.257000 0.330000 11.000000 1.000000
max 2.000000 0.815000 0.650000 1.130000 2.825500 1.488000 0.641500 1.005000 29.000000 1.000000

Then we plot the distribution of all numeric features within two targeted classes.From the plot we can group the numeric variables into three groups: (length, diameter), (height), and (whole_weight, shucked_weight, viscera_weight, shell_weight).
The first group is left skewed. The means of two classes are similar and the old abalones have less deviation from mean.The second group has some outliers and the third group is right skewed. In the third group, we can observe a difference in mean weights and the distribution of old abalones are more bell-shaped.

In [106]:
# Distribution of numeric variables: 'length', 'diameter', 'height', 'whole_weight', 'shucked_weight', 'viscera_weight', 'shell_weight'
alt.Chart(TRAINING_DATA, title="Distribution of numeric variables").mark_bar(
    opacity=0.5
).encode(
    alt.X(alt.repeat(), type="quantitative", bin=alt.Bin(maxbins=50)),
    alt.Y("count()", stack=None),
    color=alt.Color("Is old:O", scale=alt.Scale(range=['#2ca02c', '#d62728']))  # Custom ordinal color palette
).repeat(
    [
        "Length",
        "Diameter",
        "Height",
        "Whole weight",
        "Shucked weight",
        "Viscera weight",
        "Shell weight",
    ],
    columns=2,
)
Out[106]:

3.4 Correlation Analysis with Target Variable¶

In our analysis, we aim to explore the correlation between the selected predictor variables—length, height, and whole weight—and the target variable, rings. Our focus is also on identifying potential differences between young and old abalones.

Preliminary observations indicate a noticeable distinction in the regression lines when analyzing the relationships between the predictor variables (length and whole weight) and the target variable (rings) across the two classes. This suggests that the age classification of abalones may influence their physical characteristics and, consequently, their correlation with the number of rings.

In [107]:
# Base scatter plot for Length vs. Rings
point_length = (
    alt.Chart(
        TRAINING_DATA,
        title="A Difference in Correlation Between Length and Rings for Old and Young Abalones",
    )
    .mark_circle(opacity=0.3)
    .encode(
        x=alt.X("Length:Q", title="Length"),
        y=alt.Y("Rings:Q", title="Rings"),
        color=alt.Color("Is old:N", title="Is Old")  # Ensure 'Is old' is treated as nominal
    )
)

# Regression lines for each group
regression_lines = point_length.transform_regression(
    "Length", "Rings", groupby=["Is old"]
).mark_line(color="red")

# Combine scatter plot and regression lines
combined_chart = point_length + regression_lines

# Display the combined chart
combined_chart.display()
In [108]:
# Base scatter plot for Height vs. Rings
point_height = (
    alt.Chart(
        TRAINING_DATA,
        title="A Difference in Correlation Between Height and Rings for Old and Young Abalones",
    )
    .mark_circle(opacity=0.2)
    .encode(
        x=alt.X("Height:Q", title="Height"),
        y=alt.Y("Rings:Q", title="Rings"),
        color=alt.Color("Is old:N", title="Is Old")  # Ensure 'Is old' is treated as nominal
    )
)

# Regression lines for each group
regression_lines = point_height.transform_regression(
    "Height", "Rings", groupby=["Is old"]
).mark_line(color="red")

# Combine scatter plot and regression lines
combined_chart = point_height + regression_lines

# Display the combined chart
combined_chart.display()
In [109]:
# Base scatter plot for Whole Weight vs. Rings
point_weight = (
    alt.Chart(
        TRAINING_DATA,
        title="A Difference in Correlation Between Whole Weight and Rings for Old and Young Abalones",
    )
    .mark_circle(opacity=0.3)
    .encode(
        x=alt.X("Whole weight:Q", title="Whole Weight"),  # Ensure whole weight is treated as quantitative
        y=alt.Y("Rings:Q", title="Rings"),                # Ensure rings is treated as quantitative
        color=alt.Color("Is old:N", title="Is Old")       # Treat 'Is old' as nominal
    )
)

# Regression lines for each group
regression_lines = point_weight.transform_regression(
    "Whole weight", "Rings", groupby=["Is old"]
).mark_line(color="red")

# Combine scatter plot and regression lines
combined_chart = point_weight + regression_lines

# Display the combined chart
combined_chart.display()

3.5 Scatter Plots Showing the Relationship Between Continuous Features¶

We expect collinearity among the numeric features. Size measurements—length, diameter, and height—are strongly correlated with each other and also correlate with weight. This can be a concern for models sensitive to highly correlated features.

In the first group of scatter plots, we see that length, diameter, and height are linearly correlated. The relationship between size and weight, however, appears to be more non-linear. In the second group of scatter plots focused on weight, the correlations among weight features exist but are generally weaker than those among size features. The relationship between weight and age (rings) is not very clear, but there seems to be a slight positive correlation, suggesting that older abalones may weigh more.

In [110]:
# Define the features to compare
features = ["Length", "Diameter", "Height", "Whole weight"]

# Create a scatter plot matrix for the specified features
scatter_matrix = alt.Chart(TRAINING_DATA, title="Scatter Plot Matrix of Numeric Variables").mark_point(
    size=5, opacity=0.1
).encode(
    x=alt.X(alt.repeat("row"), type="quantitative"),
    y=alt.Y(alt.repeat("column"), type="quantitative"),
).properties(
    height=200, width=200
).repeat(
    row=features,  # Rows correspond to features for the y-axis
    column=features  # Columns correspond to features for the x-axis
)

# Display the scatter plot matrix
scatter_matrix.display()
In [111]:
# Define the features to compare
features = ["Shucked weight", "Viscera weight", "Shell weight", "Rings"]

# Create a scatter plot matrix for the specified features
scatter_matrix = alt.Chart(TRAINING_DATA, title="Scatter Plot Matrix of Numeric Variables").mark_point(
    size=5, opacity=0.1
).encode(
    x=alt.X(alt.repeat("column"), type="quantitative"),  # X-axis uses features for columns
    y=alt.Y(alt.repeat("row"), type="quantitative"),     # Y-axis uses features for rows
).properties(
    height=200, width=200
).repeat(
    column=features,  # Columns correspond to features for the x-axis
    row=features      # Rows correspond to features for the y-axis
)

# Display the scatter plot matrix
scatter_matrix.display()

3.6 Correlation Heat Map¶

A correlation heat map offers a visually intuitive representation of the relationships between all variables in our dataset. By examining the heat map, we can identify that the feature variables exhibit high levels of correlation. This visualization allows us to easily discern patterns and potential multicollinearity among the numeric features, which is critical for informing our modeling decisions and understanding the underlying data structure.

In [112]:
# Calculate the correlation matrix only for numeric columns.
plt.figure(figsize=(8, 6))  # Adjust the figure size if desired
sns.heatmap(TRAINING_DATA.select_dtypes(include='number').corr(), annot=True, cmap='coolwarm')
plt.show()
No description has been provided for this image

Phase 2: Build a model to overfit the entire dataset¶

  • Step 7: Create a neural network model
  • Step 8: Compile the model
  • Step 9: Train the model
  • Step 10: Check the learning curves
  • Step 11: Evaluate the model on the dataset
  • Step 13: Check what what the model actually predicts
  • Step 14: Is 'accuracy' sufficient to evaluate our model?

we want to determine how big architecture we need to overfit the data. The place to start is to use ‘logistic regression’ model and train for as many epochs as needed to obtain as high accuracy as possible so we will use the dataset without splitting then normalize it.

In [113]:
# Separate features and target
X = dataset1.iloc[:, :-1].values
y = dataset1.iloc[:, -1].values

# Normalize features
X_min = np.min(X, axis=0)
X_max = np.max(X, axis=0)
X_normalized = (X - X_min) / (X_max - X_min)

Model Creation and Training:¶

The function create_compile_model initializes a neural network based on specified architecture. The initial model is set up as a logistic regression with one neuron. Training the model produces learning curves that illustrate how loss and accuracy evolve over epochs, providing insights into model performance.

In [114]:
# 1. Model Creation and Training
def create_compile_model(input_shape, layers):
    model = Sequential()  # Initialize a Sequential model
    for i, neurons in enumerate(layers):
        if i == 0:
            model.add(Dense(neurons, input_shape=input_shape, activation='relu'))  # Input layer with ReLU activation
        else:
            model.add(Dense(neurons, activation='relu'))  # Hidden layers with ReLU activation
    model.add(Dense(1, activation='sigmoid'))  # Output layer for binary classification

    # Compile the model
    model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy'])
    return model

# The initial model starts as a logistic regression model (1 neuron)
model = create_compile_model((X_normalized.shape[1],), [1])
In [115]:
# Callbacks for ModelCheckpoint and EarlyStopping
checkpoint = ModelCheckpoint('best_model.keras', save_best_only=True, monitor='val_loss', mode='min', verbose=1)
early_stopping = EarlyStopping(monitor='val_loss', patience=40, verbose=1, restore_best_weights=True)
# Fit the model with callbacks
history = model.fit(X_normalized, y,
                    epochs=200,
                    verbose=1,
                    callbacks=[checkpoint, early_stopping])  # Add the callbacks
Epoch 1/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step - accuracy: 0.5802 - loss: 0.6774
Epoch 2/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.6950 - loss: 0.6335
Epoch 3/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7457 - loss: 0.6120
Epoch 4/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7734 - loss: 0.5946
Epoch 5/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7838 - loss: 0.5744
Epoch 6/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7839 - loss: 0.5619
Epoch 7/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7781 - loss: 0.5443
Epoch 8/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7653 - loss: 0.5468
Epoch 9/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7625 - loss: 0.5344
Epoch 10/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7667 - loss: 0.5323
Epoch 11/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7605 - loss: 0.5218
Epoch 12/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7540 - loss: 0.5259
Epoch 13/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7685 - loss: 0.5053
Epoch 14/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7647 - loss: 0.5114
Epoch 15/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7755 - loss: 0.4977
Epoch 16/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7632 - loss: 0.4988
Epoch 17/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7733 - loss: 0.4946
Epoch 18/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7743 - loss: 0.4877
Epoch 19/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7795 - loss: 0.4874
Epoch 20/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7768 - loss: 0.4827
Epoch 21/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7806 - loss: 0.4843
Epoch 22/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7710 - loss: 0.4946
Epoch 23/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7781 - loss: 0.4776
Epoch 24/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7808 - loss: 0.4739
Epoch 25/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7919 - loss: 0.4669
Epoch 26/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7989 - loss: 0.4644
Epoch 27/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.7940 - loss: 0.4597
Epoch 28/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7949 - loss: 0.4599
Epoch 29/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.8146 - loss: 0.4400
Epoch 30/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.8175 - loss: 0.4238
Epoch 31/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8262 - loss: 0.4140
Epoch 32/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8340 - loss: 0.4021
Epoch 33/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.8357 - loss: 0.3946
Epoch 34/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.8567 - loss: 0.3763
Epoch 35/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8657 - loss: 0.3680
Epoch 36/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8685 - loss: 0.3628
Epoch 37/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.8781 - loss: 0.3534
Epoch 38/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.8836 - loss: 0.3400
Epoch 39/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8810 - loss: 0.3332
Epoch 40/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.8860 - loss: 0.3380
Epoch 41/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.8895 - loss: 0.3261
Epoch 42/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.8943 - loss: 0.3199
Epoch 43/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9027 - loss: 0.3129
Epoch 44/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.8974 - loss: 0.3127
Epoch 45/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9036 - loss: 0.2982
Epoch 46/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9069 - loss: 0.2932
Epoch 47/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9063 - loss: 0.2883
Epoch 48/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9159 - loss: 0.2750
Epoch 49/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9147 - loss: 0.2777
Epoch 50/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9210 - loss: 0.2660
Epoch 51/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9148 - loss: 0.2698
Epoch 52/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9212 - loss: 0.2636
Epoch 53/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9203 - loss: 0.2597
Epoch 54/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9292 - loss: 0.2615
Epoch 55/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9309 - loss: 0.2472
Epoch 56/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9342 - loss: 0.2462
Epoch 57/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9395 - loss: 0.2368
Epoch 58/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9330 - loss: 0.2376
Epoch 59/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9403 - loss: 0.2300
Epoch 60/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9460 - loss: 0.2235
Epoch 61/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9429 - loss: 0.2245
Epoch 62/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9423 - loss: 0.2200
Epoch 63/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9428 - loss: 0.2205
Epoch 64/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9520 - loss: 0.2110
Epoch 65/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9473 - loss: 0.2081
Epoch 66/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9495 - loss: 0.2092
Epoch 67/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9539 - loss: 0.2109
Epoch 68/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9509 - loss: 0.2047
Epoch 69/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9549 - loss: 0.1940
Epoch 70/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9578 - loss: 0.1940
Epoch 71/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9611 - loss: 0.1913
Epoch 72/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9617 - loss: 0.1805
Epoch 73/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9589 - loss: 0.1909
Epoch 74/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9618 - loss: 0.1858
Epoch 75/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9640 - loss: 0.1842
Epoch 76/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9603 - loss: 0.1778
Epoch 77/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9626 - loss: 0.1797
Epoch 78/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9677 - loss: 0.1718
Epoch 79/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9706 - loss: 0.1729
Epoch 80/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9703 - loss: 0.1659
Epoch 81/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9706 - loss: 0.1734
Epoch 82/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9737 - loss: 0.1634
Epoch 83/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9742 - loss: 0.1609
Epoch 84/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9721 - loss: 0.1588
Epoch 85/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9724 - loss: 0.1595
Epoch 86/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9780 - loss: 0.1569
Epoch 87/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9800 - loss: 0.1483
Epoch 88/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9777 - loss: 0.1472
Epoch 89/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9815 - loss: 0.1527
Epoch 90/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9794 - loss: 0.1403
Epoch 91/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9810 - loss: 0.1448
Epoch 92/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9799 - loss: 0.1424
Epoch 93/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9827 - loss: 0.1332
Epoch 94/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9813 - loss: 0.1400
Epoch 95/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9822 - loss: 0.1369
Epoch 96/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9855 - loss: 0.1358
Epoch 97/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9843 - loss: 0.1316
Epoch 98/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9864 - loss: 0.1332
Epoch 99/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9872 - loss: 0.1304
Epoch 100/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9871 - loss: 0.1272
Epoch 101/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9853 - loss: 0.1278
Epoch 102/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9872 - loss: 0.1252
Epoch 103/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9899 - loss: 0.1235
Epoch 104/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9892 - loss: 0.1198
Epoch 105/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9876 - loss: 0.1209
Epoch 106/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9918 - loss: 0.1142
Epoch 107/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9899 - loss: 0.1142
Epoch 108/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9877 - loss: 0.1181
Epoch 109/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9909 - loss: 0.1141
Epoch 110/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9904 - loss: 0.1152
Epoch 111/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9891 - loss: 0.1113
Epoch 112/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9909 - loss: 0.1106
Epoch 113/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9919 - loss: 0.1083
Epoch 114/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9903 - loss: 0.1078
Epoch 115/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9934 - loss: 0.1051
Epoch 116/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9945 - loss: 0.1055
Epoch 117/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9933 - loss: 0.0972
Epoch 118/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9897 - loss: 0.1043
Epoch 119/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9925 - loss: 0.1016
Epoch 120/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9906 - loss: 0.1030
Epoch 121/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9936 - loss: 0.0976
Epoch 122/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9936 - loss: 0.0955
Epoch 123/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9940 - loss: 0.0938
Epoch 124/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9948 - loss: 0.0948
Epoch 125/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9953 - loss: 0.0900
Epoch 126/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9938 - loss: 0.0933
Epoch 127/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9944 - loss: 0.0888
Epoch 128/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9952 - loss: 0.0899
Epoch 129/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9928 - loss: 0.0876
Epoch 130/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9923 - loss: 0.0888
Epoch 131/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9952 - loss: 0.0858
Epoch 132/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9954 - loss: 0.0839
Epoch 133/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9953 - loss: 0.0850
Epoch 134/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.9950 - loss: 0.0858
Epoch 135/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9949 - loss: 0.0816
Epoch 136/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9968 - loss: 0.0809
Epoch 137/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9967 - loss: 0.0807
Epoch 138/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9962 - loss: 0.0792
Epoch 139/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9967 - loss: 0.0754
Epoch 140/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9960 - loss: 0.0782
Epoch 141/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9972 - loss: 0.0776
Epoch 142/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9955 - loss: 0.0745
Epoch 143/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9961 - loss: 0.0731
Epoch 144/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9963 - loss: 0.0785
Epoch 145/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9956 - loss: 0.0733
Epoch 146/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9976 - loss: 0.0734
Epoch 147/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9959 - loss: 0.0722
Epoch 148/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9965 - loss: 0.0687
Epoch 149/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9971 - loss: 0.0703
Epoch 150/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9965 - loss: 0.0672
Epoch 151/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9977 - loss: 0.0695
Epoch 152/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9975 - loss: 0.0693
Epoch 153/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9962 - loss: 0.0668
Epoch 154/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9975 - loss: 0.0653
Epoch 155/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9979 - loss: 0.0653
Epoch 156/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9970 - loss: 0.0637
Epoch 157/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9975 - loss: 0.0641
Epoch 158/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9970 - loss: 0.0617
Epoch 159/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9975 - loss: 0.0613
Epoch 160/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9988 - loss: 0.0607
Epoch 161/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9974 - loss: 0.0580
Epoch 162/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9967 - loss: 0.0591
Epoch 163/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9966 - loss: 0.0615
Epoch 164/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9977 - loss: 0.0557
Epoch 165/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9981 - loss: 0.0525
Epoch 166/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9973 - loss: 0.0569
Epoch 167/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9976 - loss: 0.0534
Epoch 168/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9961 - loss: 0.0572
Epoch 169/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9972 - loss: 0.0544
Epoch 170/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9990 - loss: 0.0547
Epoch 171/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9983 - loss: 0.0543
Epoch 172/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9968 - loss: 0.0527
Epoch 173/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.9983 - loss: 0.0530
Epoch 174/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9988 - loss: 0.0500
Epoch 175/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9988 - loss: 0.0506
Epoch 176/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9976 - loss: 0.0493
Epoch 177/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9974 - loss: 0.0483
Epoch 178/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9985 - loss: 0.0497
Epoch 179/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9988 - loss: 0.0494
Epoch 180/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9982 - loss: 0.0467
Epoch 181/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9985 - loss: 0.0470
Epoch 182/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9984 - loss: 0.0476
Epoch 183/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9992 - loss: 0.0454
Epoch 184/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9991 - loss: 0.0455
Epoch 185/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9990 - loss: 0.0428
Epoch 186/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9990 - loss: 0.0423
Epoch 187/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9983 - loss: 0.0419
Epoch 188/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9992 - loss: 0.0428
Epoch 189/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9989 - loss: 0.0420
Epoch 190/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9999 - loss: 0.0396
Epoch 191/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9993 - loss: 0.0396
Epoch 192/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9992 - loss: 0.0404
Epoch 193/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9994 - loss: 0.0388
Epoch 194/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9986 - loss: 0.0405
Epoch 195/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9997 - loss: 0.0378
Epoch 196/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9989 - loss: 0.0373
Epoch 197/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9998 - loss: 0.0355
Epoch 198/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9998 - loss: 0.0365
Epoch 199/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9996 - loss: 0.0357
Epoch 200/200
131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9993 - loss: 0.0338

Evaluation:¶

After training, the model is evaluated on the entire dataset. Predictions are generated, enabling the calculation of accuracy, precision, recall, and F1 score. Displaying the first ten predictions alongside the actual values provides a direct comparison, allowing for assessment of prediction quality.

In [116]:
# Plot learning curves
plt.plot(history.history['loss'], label='Loss')
plt.plot(history.history['accuracy'], label='Accuracy')
plt.title('Learning Curves')
plt.xlabel('Epochs')
plt.ylabel('Value')
plt.legend()
plt.show()
print('\n')
# 2. Evaluation: The model is evaluated on the entire dataset, and predictions are made,
# followed by calculating additional metrics (precision, recall, and F1 score).
accuracy = model.evaluate(X_normalized, y, verbose=1)[1]
print(f'Final accuracy on entire dataset: {accuracy * 100:.2f}%')
print('\n')
# Check model predictions
predictions = model.predict(X_normalized).flatten()  # Get raw predictions
predictions_binary = np.round(predictions)  # Convert predictions to binary (0 or 1)
print('\n')
# Analyze predictions
print(f'Predictions:         {predictions[:10]}')
print(f'Binary Predictions:  {predictions_binary[:10]}')
print(f'True Values:         {y[:10]}')

# Additional Metrics
precision = precision_score(y, predictions_binary)
recall = recall_score(y, predictions_binary)
f1 = f1_score(y, predictions_binary)

print(f'Precision:           {precision * 100:.2f}%')
print(f'Recall:              {recall * 100:.2f}')
print(f'F1 Score:            {f1 * 100:.2f}')
No description has been provided for this image

131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.9998 - loss: 0.0339
Final accuracy on entire dataset: 99.98%


131/131 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step


Predictions:         [0.00 0.00 0.00 0.00 0.00 0.99 0.99 0.85 0.99 0.00]
Binary Predictions:  [0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 1.00 0.00]
True Values:         [0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 1.00 0.00]
Precision:           99.95%
Recall:              100.00
F1 Score:            99.98

Iterative Model Growth:¶

The function iteratively_increase_model systematically tests various configurations of the neural network by varying the number of neurons in the hidden layer. The architectures tested include 2, 4, 8, 16, 32, and 64 neurons. Each configuration's accuracy is evaluated:

Architecture Evaluation: After fitting the model for each specified number of neurons, its accuracy is calculated. All tested architectures are logged for later review. Overfitting Check: The evaluation stops if any architecture achieves an accuracy of 99.5%, which could indicate overfitting. Summary of Results: After testing all configurations, a summary is printed to show the performance of each tested architecture and highlight the best-performing configuration. This iterative process aims to discover the optimal architecture that can overfit the dataset, which will later guide efforts to simplify the model for better generalization. The function iteratively_increase_model tests various configurations by changing the number of neurons in the hidden layer. It tracks the accuracy for each architecture tested and identifies the configuration that performs best. The process stops when a model achieves at least 99.5% accuracy, indicating potential overfitting.

The function also records and prints a summary of results, detailing how each architecture performed, which is crucial in determining the minimal architecture needed to potentially overfit the dataset. Insights gained from this iterative growth will guide the process of model simplification for generalization in later phases.

In [117]:
# 3. Iterative Model Growth: A separate function is implemented to test various neuron configurations (2, 4, 8, 16 neurons).
def iteratively_increase_model():
    architectures_tested = []
    best_accuracy = 0
    best_architecture = None

    # Vary the number of neurons in simple iterative architectures
    for neurons in [2, 4, 8, 16, 32, 64]:
        print(f"\nTesting architecture with {neurons} neurons")
        model = create_compile_model((X_normalized.shape[1],), [neurons])
        history = model.fit(X_normalized, y, epochs=500, verbose=0)  # Increase epochs for deeper training

        # Evaluate the model on the entire dataset
        accuracy = model.evaluate(X_normalized, y, verbose=0)[1]
        architectures_tested.append((neurons, accuracy * 100))

        print(f'Architecture with [{neurons}] neurons achieved accuracy: {accuracy * 100:.2f}%')

        # Check if this architecture overfits well
        if accuracy > best_accuracy:
            best_accuracy = accuracy
            best_architecture = neurons

        # Break if exceeding 99% accuracy signifies overfitting
        if accuracy >= 0.995:
            print(f'Overfit achieved with architecture [{neurons}].\n')
            break
# Print summary of results
    print("\nSummary of model architectures tested:")
    for neurons, acc in architectures_tested:
        print(f'Neurons: {neurons}, Accuracy: {acc:.2f}%')

    print(f"\nBest Architecture: [{best_architecture}] neurons with accuracy {best_accuracy * 100:.2f}%")

# Run the architecture testing
iteratively_increase_model()
Testing architecture with 2 neurons
Architecture with [2] neurons achieved accuracy: 100.00%
Overfit achieved with architecture [2].


Summary of model architectures tested:
Neurons: 2, Accuracy: 100.00%

Best Architecture: [2] neurons with accuracy 100.00%

Normalize the dataset¶

  • Step 5: Normalize (if needed)
  • Step 6: Review the dimensions of training & validation set
In [47]:
shuffled_dataset = dataset1.sample(frac=1, random_state=42).reset_index(drop=True)

# Separate features and target
index_20percent = int(0.2 * len(shuffled_dataset))  # Get 20% of the total number of rows
print(index_20percent)
XVALIDATION = shuffled_dataset.iloc[:index_20percent, :-2].values
YVALIDATION = shuffled_dataset.iloc[:index_20percent, -1].values

XTRAIN = shuffled_dataset.iloc[index_20percent:, :-2].values
YTRAIN = shuffled_dataset.iloc[index_20percent:, -1].values

plt.hist(XTRAIN[:, 7])
plt.ylabel('8th Column (fixed Rings)')
plt.show()
XTRAIN_df = pd.DataFrame(XTRAIN)
XTRAIN_df.head()
835
No description has been provided for this image
Out[47]:
0 1 2 3 4 5 6 7
0 2.0 0.415 0.325 0.100 0.3850 0.1670 0.0800 0.1250
1 1.0 0.280 0.200 0.080 0.0915 0.0330 0.0215 0.0300
2 2.0 0.185 0.130 0.045 0.0290 0.0120 0.0075 0.0095
3 0.0 0.550 0.380 0.165 1.2050 0.5430 0.2940 0.3345
4 1.0 0.360 0.295 0.130 0.2765 0.0895 0.0570 0.1005
In [48]:
# Histogram for training set output labels
plt.figure(figsize=(10, 4))  # Set figure size
plt.hist(YTRAIN, bins=30, color='blue', alpha=0.7)  # Adjust number of bins and color
plt.title('Distribution of Output Labels in Training Set')  # Title for the histogram
plt.xlabel('Output Labels')  # X-axis label
plt.ylabel('Frequency')      # Y-axis label
plt.grid(axis='y', alpha=0.5)  # Optional grid for better readability
plt.show()

# Histogram for validation set output labels
plt.figure(figsize=(10, 4))  # Set figure size
plt.hist(YVALIDATION, bins=30, color='orange', alpha=0.7)  # Adjust number of bins and color
plt.title('Distribution of Output Labels in Validation Set')  # Title for the histogram
plt.xlabel('Output Labels')  # X-axis label
plt.ylabel('Frequency')      # Y-axis label
plt.grid(axis='y', alpha=0.5)  # Optional grid for better readability
plt.show()
No description has been provided for this image
No description has been provided for this image
In [118]:
# Convert to NumPy array after preprocessing
dataset = np.array(dataset1)

# Shuffle the dataset
# Convert dataset back to DataFrame for shuffling and further processing
dataset = pd.DataFrame(dataset, columns=['Sex', 'Length', 'Diameter', 'Height', 'Whole weight',
                                       'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings', 'Is old'])
shuffled_dataset = dataset.sample(frac=1, random_state=42).reset_index(drop=True)

# Separate features and target
X = shuffled_dataset[['Sex', 'Length', 'Diameter', 'Height', 'Whole weight',
                      'Shucked weight', 'Viscera weight', 'Shell weight']]
y = shuffled_dataset['Is old']

# Min-Max Normalization
normalized_X = (X - X.min()) / (X.max() - X.min())

# Split indices
split_index = int(0.8 * len(normalized_X))
XTRAIN = normalized_X.iloc[:split_index]
YTRAIN = y.iloc[:split_index]
XVALIDATION = normalized_X.iloc[split_index:]
YVALIDATION = y.iloc[split_index:]
In [119]:
# Print shapes of the datasets
print("Shapes of the datasets:")
print("XTRAIN shape:", XTRAIN.shape)
print("YTRAIN shape:", YTRAIN.shape)
print("X_VALIDATION_normalized shape:", XVALIDATION.shape)
print("X_TRAIN_normalized shape:", XTRAIN.shape)
print("YVALIDATION shape:", YVALIDATION.shape)

# Print first three rows of each dataset
print("\nFirst three entries of the datasets:")
print("XTRAIN (first 3 rows):")
print(XTRAIN.iloc[0:3])  # Use .iloc for row slicing in DataFrames
print("YTRAIN (first 3 entries):")
print(YTRAIN.iloc[0:3]) # Use .iloc for row slicing in Series

print("\nX_VALIDATION_normalized (first 3 rows):")
print(XVALIDATION.iloc[0:3])  # Use .iloc for row slicing in DataFrames
print("X_TRAIN_normalized (first 3 rows):")
print(XTRAIN.iloc[0:3])  # Use .iloc for row slicing in DataFrames

print("\nYVALIDATION (first 3 entries):")
print(YVALIDATION.iloc[0:3])  # Use .iloc for row slicing in Series
Shapes of the datasets:
XTRAIN shape: (3340, 8)
YTRAIN shape: (3340,)
X_VALIDATION_normalized shape: (835, 8)
X_TRAIN_normalized shape: (3340, 8)
YVALIDATION shape: (835,)

First three entries of the datasets:
XTRAIN (first 3 rows):
   Sex    Length  Diameter    Height  Whole weight  Shucked weight  \
0  0.0  0.479730  0.453782  0.098214      0.156897        0.110289   
1  0.5  0.777027  0.773109  0.156250      0.486099        0.503699   
2  0.0  0.797297  0.789916  0.151786      0.558350        0.455279   

   Viscera weight  Shell weight  
0        0.130349      0.152965  
1        0.236998      0.366218  
2        0.300856      0.452915  
YTRAIN (first 3 entries):
0    0.0
1    1.0
2    1.0
Name: Is old, dtype: float64

X_VALIDATION_normalized (first 3 rows):
      Sex    Length  Diameter    Height  Whole weight  Shucked weight  \
3340  0.5  0.533784  0.537815  0.107143      0.204533        0.143578   
3341  0.0  0.804054  0.890756  0.133929      0.462901        0.365501   
3342  0.0  0.486486  0.495798  0.098214      0.161679        0.128447   

      Viscera weight  Shell weight  
3340        0.215273      0.192825  
3341        0.387097      0.410065  
3342        0.131007      0.128052  
X_TRAIN_normalized (first 3 rows):
   Sex    Length  Diameter    Height  Whole weight  Shucked weight  \
0  0.0  0.479730  0.453782  0.098214      0.156897        0.110289   
1  0.5  0.777027  0.773109  0.156250      0.486099        0.503699   
2  0.0  0.797297  0.789916  0.151786      0.558350        0.455279   

   Viscera weight  Shell weight  
0        0.130349      0.152965  
1        0.236998      0.366218  
2        0.300856      0.452915  

YVALIDATION (first 3 entries):
3340    1.0
3341    1.0
3342    1.0
Name: Is old, dtype: float64
In [120]:
# Plot histogram for the 8th column of the normalized training set
plt.figure(figsize=(10, 5))  # Set figure size for better visibility
# Access the 8th column (index 7) using .iloc
plt.hist(XTRAIN.iloc[:, 7], bins=30, color='skyblue', alpha=0.7)  # Specify the number of bins and colors
plt.title('Distribution of the 8th Column (Fixed Rings) in Normalized Training Set')  # Title for the histogram
plt.xlabel('Fixed Rings')  # X-axis label
plt.ylabel('Frequency')     # Y-axis label
plt.grid(axis='y', alpha=0.5)  # Optional grid for better readability
plt.show()
No description has been provided for this image
In [121]:
# Calculate the correlation matrix only for numeric columns.
dataset_df = pd.DataFrame(XTRAIN)
plt.figure(figsize=(8, 6))  # Adjust the figure size if desired
sns.heatmap(dataset_df.select_dtypes(include='number').corr(), annot=True, cmap='coolwarm')
plt.show()
No description has been provided for this image

Phase 3: Model selection & evaluation¶

  • Step 7: Create a neural network model
  • Step 8: Compile the model
  • Step 9: Train the model
  • Step 10: Check the learning curves
  • Step 11: Evaluate the model on the training data
  • Step 12: Evaluate on validation set
  • Step 13: Check what the model actually predicts
  • Step 14: Is 'accuracy' sufficient to evaluate our model?

This code implements a machine learning workflow:¶

that starts by defining and training a logistic regression model as a baseline for binary classification. It then evaluates multiple neural network architectures to improve performance, using ReLU activation in hidden layers and a sigmoid activation for the output layer. The models are trained and evaluated based on key metrics, including accuracy, precision, recall, and F1 score. Learning curves are plotted to visualize performance over epochs, and the best-performing model is saved using model checkpointing and early stop. Finally, a classification report summarizes the model's predictive capabilities, identifying the architecture that achieved the highest validation accuracy.

In [77]:
# Load dataset
BASE_PATH = 'https://raw.githubusercontent.com/Alakhras/Abalone-Age/main/Abalone.csv'
dataset = pd.read_csv(BASE_PATH)

# Clean the dataset and preprocess
dataset = dataset[dataset.Height != 0]
dataset["Rings"] = np.where(dataset["Rings"] < 9 , 0, 1)
dataset['Sex'] = dataset['Sex'].map({'M': 1, 'F': 0, 'I': 2}).astype(float)

# Shuffle the dataset
shuffled_dataset = dataset.sample(frac=1, random_state=42).reset_index(drop=True)

# Separate features and target
X = shuffled_dataset[['Sex', 'Length', 'Diameter', 'Height', 'Whole weight',
                      'Shucked weight', 'Viscera weight', 'Shell weight']]
y = shuffled_dataset['Rings']

# Min-Max Normalization
normalized_X = (X - X.min()) / (X.max() - X.min())

# Split indices
split_index = int(0.8 * len(normalized_X))
XTRAIN = normalized_X.iloc[:split_index]
YTRAIN = y.iloc[:split_index]
XVALIDATION = normalized_X.iloc[split_index:]
YVALIDATION = y.iloc[split_index:]

XTRAIN.head()
Out[77]:
Sex Length Diameter Height Whole weight Shucked weight Viscera weight Shell weight
0 0.5 0.695946 0.680672 0.129464 0.320170 0.219233 0.194865 0.332337
1 0.5 0.317568 0.319328 0.075893 0.052417 0.034633 0.044108 0.046338
2 1.0 0.398649 0.378151 0.071429 0.081813 0.060188 0.071099 0.068261
3 1.0 0.608108 0.630252 0.125000 0.260138 0.216207 0.211323 0.212755
4 0.0 0.689189 0.680672 0.138393 0.374004 0.326160 0.328506 0.291480
In [78]:
# Model Checkpointing and Early Stopping
checkpoint_path = 'best_model.keras' # Changed the file extension to .h5 and removed the weights from the file name
checkpoint = ModelCheckpoint(checkpoint_path, save_best_only=True, monitor='val_loss', mode='min', verbose=1) # Removed save_weights_only=True
early_stopping = EarlyStopping(monitor='val_loss', mode='min', patience=20, restore_best_weights=True,verbose=1)

# Enhanced Model Creation Function
def create_model(layer_sizes):
    model = models.Sequential()
    for size in layer_sizes:
        model.add(layers.Dense(size, activation='relu'))
        model.add(layers.BatchNormalization())
        model.add(layers.Dropout(0.2))  # Adding Dropout for regularization
    model.add(layers.Dense(1, activation='sigmoid'))  # For binary classification
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

# Define neural network architectures to evaluate
model_architectures = [
    (2, 1),  # Model 1
    (4, 1),  # Model 2
    (8, 1),  # Model 3
    (16, 8, 1),  # Model 4
    (32, 16, 8, 1),  # Model 5
    (64, 32, 16, 8, 1)  # Model 6
]

# Store results
results = []

# Train and evaluate neural network models
for arch in model_architectures:
    model = create_model(arch)
    history = model.fit(XTRAIN, YTRAIN, validation_data=(XVALIDATION, YVALIDATION),
                        epochs=200, batch_size=8, verbose=1,
                        callbacks=[checkpoint, early_stopping])
    # Evaluate the model
    train_loss, train_acc = model.evaluate(XTRAIN, YTRAIN, verbose=0)
    val_loss, val_acc = model.evaluate(XVALIDATION, YVALIDATION, verbose=0)
    total_params = model.count_params()

    results.append((str(arch), train_acc, val_acc, train_loss, val_loss, total_params))
Epoch 1/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7492 - loss: 0.5873
Epoch 1: val_loss improved from inf to 0.48657, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.7492 - loss: 0.5872 - val_accuracy: 0.8240 - val_loss: 0.4866
Epoch 2/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7633 - loss: 0.5453
Epoch 2: val_loss improved from 0.48657 to 0.46536, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7632 - loss: 0.5453 - val_accuracy: 0.8287 - val_loss: 0.4654
Epoch 3/200
400/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7564 - loss: 0.5526
Epoch 3: val_loss improved from 0.46536 to 0.45849, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7565 - loss: 0.5520 - val_accuracy: 0.8287 - val_loss: 0.4585
Epoch 4/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7560 - loss: 0.5339
Epoch 4: val_loss did not improve from 0.45849
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7561 - loss: 0.5339 - val_accuracy: 0.8263 - val_loss: 0.4585
Epoch 5/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7471 - loss: 0.5459
Epoch 5: val_loss improved from 0.45849 to 0.44639, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7474 - loss: 0.5457 - val_accuracy: 0.8323 - val_loss: 0.4464
Epoch 6/200
403/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7585 - loss: 0.5150
Epoch 6: val_loss did not improve from 0.44639
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7587 - loss: 0.5150 - val_accuracy: 0.8299 - val_loss: 0.4503
Epoch 7/200
395/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7623 - loss: 0.5231
Epoch 7: val_loss improved from 0.44639 to 0.44014, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7626 - loss: 0.5228 - val_accuracy: 0.8347 - val_loss: 0.4401
Epoch 8/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7429 - loss: 0.5358
Epoch 8: val_loss did not improve from 0.44014
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7432 - loss: 0.5355 - val_accuracy: 0.8335 - val_loss: 0.4458
Epoch 9/200
393/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7546 - loss: 0.5277
Epoch 9: val_loss did not improve from 0.44014
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7554 - loss: 0.5274 - val_accuracy: 0.8347 - val_loss: 0.4455
Epoch 10/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7703 - loss: 0.5195
Epoch 10: val_loss improved from 0.44014 to 0.43673, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7703 - loss: 0.5195 - val_accuracy: 0.8347 - val_loss: 0.4367
Epoch 11/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7671 - loss: 0.5176
Epoch 11: val_loss did not improve from 0.43673
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7670 - loss: 0.5178 - val_accuracy: 0.8299 - val_loss: 0.4518
Epoch 12/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7592 - loss: 0.5304
Epoch 12: val_loss did not improve from 0.43673
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7591 - loss: 0.5304 - val_accuracy: 0.8335 - val_loss: 0.4450
Epoch 13/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7667 - loss: 0.5224
Epoch 13: val_loss did not improve from 0.43673
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7667 - loss: 0.5224 - val_accuracy: 0.8335 - val_loss: 0.4426
Epoch 14/200
399/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7600 - loss: 0.5199
Epoch 14: val_loss did not improve from 0.43673
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7600 - loss: 0.5202 - val_accuracy: 0.8311 - val_loss: 0.4574
Epoch 15/200
395/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7743 - loss: 0.5107
Epoch 15: val_loss did not improve from 0.43673
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7739 - loss: 0.5112 - val_accuracy: 0.8359 - val_loss: 0.4437
Epoch 16/200
395/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7667 - loss: 0.5228
Epoch 16: val_loss did not improve from 0.43673
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7663 - loss: 0.5232 - val_accuracy: 0.8323 - val_loss: 0.4580
Epoch 17/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7390 - loss: 0.5447
Epoch 17: val_loss improved from 0.43673 to 0.43657, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7397 - loss: 0.5442 - val_accuracy: 0.8395 - val_loss: 0.4366
Epoch 18/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7713 - loss: 0.5173
Epoch 18: val_loss did not improve from 0.43657
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7712 - loss: 0.5174 - val_accuracy: 0.8347 - val_loss: 0.4450
Epoch 19/200
401/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7629 - loss: 0.5275
Epoch 19: val_loss did not improve from 0.43657
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7628 - loss: 0.5275 - val_accuracy: 0.8347 - val_loss: 0.4403
Epoch 20/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7566 - loss: 0.5256
Epoch 20: val_loss did not improve from 0.43657
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7568 - loss: 0.5256 - val_accuracy: 0.8347 - val_loss: 0.4433
Epoch 21/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7654 - loss: 0.5211
Epoch 21: val_loss improved from 0.43657 to 0.43490, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7654 - loss: 0.5211 - val_accuracy: 0.8383 - val_loss: 0.4349
Epoch 22/200
400/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7725 - loss: 0.5216
Epoch 22: val_loss did not improve from 0.43490
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7724 - loss: 0.5217 - val_accuracy: 0.8395 - val_loss: 0.4401
Epoch 23/200
413/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7602 - loss: 0.5268
Epoch 23: val_loss improved from 0.43490 to 0.43333, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7603 - loss: 0.5266 - val_accuracy: 0.8371 - val_loss: 0.4333
Epoch 24/200
399/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7571 - loss: 0.5283
Epoch 24: val_loss did not improve from 0.43333
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7574 - loss: 0.5282 - val_accuracy: 0.8347 - val_loss: 0.4464
Epoch 25/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7752 - loss: 0.5072
Epoch 25: val_loss did not improve from 0.43333
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7750 - loss: 0.5074 - val_accuracy: 0.8383 - val_loss: 0.4358
Epoch 26/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7550 - loss: 0.5182
Epoch 26: val_loss improved from 0.43333 to 0.43155, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7551 - loss: 0.5182 - val_accuracy: 0.8371 - val_loss: 0.4315
Epoch 27/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7741 - loss: 0.5208
Epoch 27: val_loss did not improve from 0.43155
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7740 - loss: 0.5209 - val_accuracy: 0.8335 - val_loss: 0.4410
Epoch 28/200
408/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7626 - loss: 0.5255
Epoch 28: val_loss did not improve from 0.43155
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7627 - loss: 0.5253 - val_accuracy: 0.8359 - val_loss: 0.4337
Epoch 29/200
391/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7702 - loss: 0.5234
Epoch 29: val_loss did not improve from 0.43155
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7698 - loss: 0.5235 - val_accuracy: 0.8347 - val_loss: 0.4367
Epoch 30/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7643 - loss: 0.5158
Epoch 30: val_loss did not improve from 0.43155
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7644 - loss: 0.5157 - val_accuracy: 0.8383 - val_loss: 0.4386
Epoch 31/200
409/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7561 - loss: 0.5339
Epoch 31: val_loss did not improve from 0.43155
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7564 - loss: 0.5335 - val_accuracy: 0.8299 - val_loss: 0.4365
Epoch 32/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7716 - loss: 0.5056
Epoch 32: val_loss improved from 0.43155 to 0.42571, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7714 - loss: 0.5059 - val_accuracy: 0.8299 - val_loss: 0.4257
Epoch 33/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7619 - loss: 0.5167
Epoch 33: val_loss did not improve from 0.42571
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7619 - loss: 0.5167 - val_accuracy: 0.8359 - val_loss: 0.4509
Epoch 34/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7686 - loss: 0.5043
Epoch 34: val_loss did not improve from 0.42571
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7684 - loss: 0.5046 - val_accuracy: 0.8335 - val_loss: 0.4363
Epoch 35/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7553 - loss: 0.5310
Epoch 35: val_loss did not improve from 0.42571
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7553 - loss: 0.5309 - val_accuracy: 0.8335 - val_loss: 0.4297
Epoch 36/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7466 - loss: 0.5312
Epoch 36: val_loss did not improve from 0.42571
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7466 - loss: 0.5311 - val_accuracy: 0.8359 - val_loss: 0.4424
Epoch 37/200
401/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7417 - loss: 0.5196
Epoch 37: val_loss did not improve from 0.42571
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7420 - loss: 0.5195 - val_accuracy: 0.8383 - val_loss: 0.4263
Epoch 38/200
399/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7677 - loss: 0.4906
Epoch 38: val_loss did not improve from 0.42571
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7670 - loss: 0.4919 - val_accuracy: 0.8359 - val_loss: 0.4375
Epoch 39/200
400/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7821 - loss: 0.4857
Epoch 39: val_loss improved from 0.42571 to 0.41678, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7812 - loss: 0.4864 - val_accuracy: 0.8359 - val_loss: 0.4168
Epoch 40/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7514 - loss: 0.5080
Epoch 40: val_loss improved from 0.41678 to 0.41058, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7515 - loss: 0.5080 - val_accuracy: 0.8395 - val_loss: 0.4106
Epoch 41/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7684 - loss: 0.4872
Epoch 41: val_loss did not improve from 0.41058
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7683 - loss: 0.4873 - val_accuracy: 0.8299 - val_loss: 0.4338
Epoch 42/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7679 - loss: 0.5016
Epoch 42: val_loss improved from 0.41058 to 0.40737, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7677 - loss: 0.5018 - val_accuracy: 0.8395 - val_loss: 0.4074
Epoch 43/200
408/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7658 - loss: 0.4960
Epoch 43: val_loss did not improve from 0.40737
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7657 - loss: 0.4961 - val_accuracy: 0.8407 - val_loss: 0.4162
Epoch 44/200
396/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7710 - loss: 0.4937
Epoch 44: val_loss did not improve from 0.40737
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7708 - loss: 0.4937 - val_accuracy: 0.8395 - val_loss: 0.4124
Epoch 45/200
396/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7736 - loss: 0.4950
Epoch 45: val_loss improved from 0.40737 to 0.40593, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7732 - loss: 0.4948 - val_accuracy: 0.8383 - val_loss: 0.4059
Epoch 46/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7715 - loss: 0.4781
Epoch 46: val_loss improved from 0.40593 to 0.39580, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7715 - loss: 0.4781 - val_accuracy: 0.8431 - val_loss: 0.3958
Epoch 47/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7677 - loss: 0.4907
Epoch 47: val_loss did not improve from 0.39580
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7674 - loss: 0.4911 - val_accuracy: 0.8419 - val_loss: 0.4089
Epoch 48/200
391/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7622 - loss: 0.4885
Epoch 48: val_loss did not improve from 0.39580
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7614 - loss: 0.4892 - val_accuracy: 0.8467 - val_loss: 0.4085
Epoch 49/200
394/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7540 - loss: 0.5050
Epoch 49: val_loss did not improve from 0.39580
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7545 - loss: 0.5045 - val_accuracy: 0.8419 - val_loss: 0.3971
Epoch 50/200
398/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7614 - loss: 0.4867
Epoch 50: val_loss improved from 0.39580 to 0.39260, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7611 - loss: 0.4870 - val_accuracy: 0.8479 - val_loss: 0.3926
Epoch 51/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7600 - loss: 0.4906
Epoch 51: val_loss improved from 0.39260 to 0.39236, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7600 - loss: 0.4906 - val_accuracy: 0.8491 - val_loss: 0.3924
Epoch 52/200
413/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7507 - loss: 0.5015
Epoch 52: val_loss did not improve from 0.39236
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7507 - loss: 0.5014 - val_accuracy: 0.8491 - val_loss: 0.3984
Epoch 53/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7626 - loss: 0.4862
Epoch 53: val_loss did not improve from 0.39236
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7626 - loss: 0.4862 - val_accuracy: 0.8383 - val_loss: 0.4044
Epoch 54/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7594 - loss: 0.4891
Epoch 54: val_loss improved from 0.39236 to 0.38014, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7595 - loss: 0.4891 - val_accuracy: 0.8491 - val_loss: 0.3801
Epoch 55/200
398/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7782 - loss: 0.4682
Epoch 55: val_loss improved from 0.38014 to 0.37970, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7774 - loss: 0.4690 - val_accuracy: 0.8515 - val_loss: 0.3797
Epoch 56/200
400/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7514 - loss: 0.5001
Epoch 56: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7517 - loss: 0.4999 - val_accuracy: 0.8467 - val_loss: 0.3991
Epoch 57/200
403/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7585 - loss: 0.4981
Epoch 57: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7585 - loss: 0.4982 - val_accuracy: 0.8455 - val_loss: 0.3862
Epoch 58/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7655 - loss: 0.4830
Epoch 58: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7656 - loss: 0.4830 - val_accuracy: 0.8419 - val_loss: 0.3966
Epoch 59/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7579 - loss: 0.4996
Epoch 59: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7578 - loss: 0.4997 - val_accuracy: 0.8443 - val_loss: 0.4125
Epoch 60/200
409/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7610 - loss: 0.4886
Epoch 60: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7609 - loss: 0.4888 - val_accuracy: 0.8455 - val_loss: 0.3947
Epoch 61/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7828 - loss: 0.4693
Epoch 61: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7827 - loss: 0.4694 - val_accuracy: 0.8479 - val_loss: 0.3898
Epoch 62/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7670 - loss: 0.4721
Epoch 62: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7669 - loss: 0.4723 - val_accuracy: 0.8443 - val_loss: 0.3856
Epoch 63/200
397/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7825 - loss: 0.4667
Epoch 63: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7821 - loss: 0.4671 - val_accuracy: 0.8359 - val_loss: 0.4069
Epoch 64/200
397/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7448 - loss: 0.5198
Epoch 64: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7450 - loss: 0.5190 - val_accuracy: 0.8371 - val_loss: 0.4035
Epoch 65/200
391/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7613 - loss: 0.4783
Epoch 65: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7615 - loss: 0.4788 - val_accuracy: 0.8347 - val_loss: 0.3970
Epoch 66/200
394/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7631 - loss: 0.4976
Epoch 66: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7623 - loss: 0.4980 - val_accuracy: 0.8443 - val_loss: 0.4012
Epoch 67/200
401/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7552 - loss: 0.5082
Epoch 67: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7556 - loss: 0.5076 - val_accuracy: 0.8359 - val_loss: 0.3957
Epoch 68/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7649 - loss: 0.4910
Epoch 68: val_loss did not improve from 0.37970
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7648 - loss: 0.4911 - val_accuracy: 0.8419 - val_loss: 0.3934
Epoch 69/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7650 - loss: 0.4791
Epoch 69: val_loss improved from 0.37970 to 0.37717, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7650 - loss: 0.4792 - val_accuracy: 0.8503 - val_loss: 0.3772
Epoch 70/200
396/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7756 - loss: 0.4790
Epoch 70: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7751 - loss: 0.4791 - val_accuracy: 0.8431 - val_loss: 0.3923
Epoch 71/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7706 - loss: 0.4778
Epoch 71: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7706 - loss: 0.4779 - val_accuracy: 0.8407 - val_loss: 0.3989
Epoch 72/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7644 - loss: 0.4819
Epoch 72: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7645 - loss: 0.4819 - val_accuracy: 0.8371 - val_loss: 0.4039
Epoch 73/200
408/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7622 - loss: 0.4847
Epoch 73: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7620 - loss: 0.4850 - val_accuracy: 0.8407 - val_loss: 0.4111
Epoch 74/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7556 - loss: 0.4849
Epoch 74: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7556 - loss: 0.4849 - val_accuracy: 0.8503 - val_loss: 0.3800
Epoch 75/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7736 - loss: 0.4726
Epoch 75: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7733 - loss: 0.4731 - val_accuracy: 0.8084 - val_loss: 0.4280
Epoch 76/200
397/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7422 - loss: 0.5095
Epoch 76: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7428 - loss: 0.5086 - val_accuracy: 0.8407 - val_loss: 0.3826
Epoch 77/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7637 - loss: 0.4900
Epoch 77: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7636 - loss: 0.4900 - val_accuracy: 0.8455 - val_loss: 0.3975
Epoch 78/200
408/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7744 - loss: 0.4646
Epoch 78: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7743 - loss: 0.4649 - val_accuracy: 0.8359 - val_loss: 0.4068
Epoch 79/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7504 - loss: 0.4932
Epoch 79: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7506 - loss: 0.4931 - val_accuracy: 0.8479 - val_loss: 0.3806
Epoch 80/200
401/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7563 - loss: 0.4873
Epoch 80: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7567 - loss: 0.4870 - val_accuracy: 0.8383 - val_loss: 0.3788
Epoch 81/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7625 - loss: 0.4793
Epoch 81: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7625 - loss: 0.4795 - val_accuracy: 0.8359 - val_loss: 0.3834
Epoch 82/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7666 - loss: 0.4797
Epoch 82: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7665 - loss: 0.4795 - val_accuracy: 0.8443 - val_loss: 0.3837
Epoch 83/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7491 - loss: 0.5067
Epoch 83: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7493 - loss: 0.5065 - val_accuracy: 0.8455 - val_loss: 0.3895
Epoch 84/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7582 - loss: 0.4942
Epoch 84: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7583 - loss: 0.4942 - val_accuracy: 0.8419 - val_loss: 0.3847
Epoch 85/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7668 - loss: 0.4859
Epoch 85: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7667 - loss: 0.4860 - val_accuracy: 0.8407 - val_loss: 0.4003
Epoch 86/200
413/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7842 - loss: 0.4750
Epoch 86: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7840 - loss: 0.4751 - val_accuracy: 0.8395 - val_loss: 0.4108
Epoch 87/200
401/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7680 - loss: 0.4813
Epoch 87: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7681 - loss: 0.4813 - val_accuracy: 0.8347 - val_loss: 0.3837
Epoch 88/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7638 - loss: 0.4867
Epoch 88: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7639 - loss: 0.4867 - val_accuracy: 0.8479 - val_loss: 0.3991
Epoch 89/200
404/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7608 - loss: 0.4804
Epoch 89: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7608 - loss: 0.4807 - val_accuracy: 0.8347 - val_loss: 0.4086
Epoch 89: early stopping
Restoring model weights from the end of the best epoch: 69.
Epoch 1/200
398/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.3940 - loss: 0.7594
Epoch 1: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.3998 - loss: 0.7574 - val_accuracy: 0.6683 - val_loss: 0.6490
Epoch 2/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.6489 - loss: 0.6485
Epoch 2: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.6492 - loss: 0.6481 - val_accuracy: 0.6683 - val_loss: 0.5675
Epoch 3/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.6931 - loss: 0.5915
Epoch 3: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.6936 - loss: 0.5912 - val_accuracy: 0.8216 - val_loss: 0.5150
Epoch 4/200
396/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7462 - loss: 0.5522
Epoch 4: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7466 - loss: 0.5520 - val_accuracy: 0.8240 - val_loss: 0.4896
Epoch 5/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7615 - loss: 0.5390
Epoch 5: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7618 - loss: 0.5388 - val_accuracy: 0.8263 - val_loss: 0.4695
Epoch 6/200
396/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7788 - loss: 0.5102
Epoch 6: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7787 - loss: 0.5105 - val_accuracy: 0.8335 - val_loss: 0.4557
Epoch 7/200
409/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7774 - loss: 0.5188
Epoch 7: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7773 - loss: 0.5188 - val_accuracy: 0.8299 - val_loss: 0.4532
Epoch 8/200
397/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7683 - loss: 0.5256
Epoch 8: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7682 - loss: 0.5258 - val_accuracy: 0.8240 - val_loss: 0.4538
Epoch 9/200
396/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7746 - loss: 0.5174
Epoch 9: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7749 - loss: 0.5168 - val_accuracy: 0.8371 - val_loss: 0.4444
Epoch 10/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7790 - loss: 0.5133
Epoch 10: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7789 - loss: 0.5135 - val_accuracy: 0.8335 - val_loss: 0.4455
Epoch 11/200
402/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7821 - loss: 0.5107
Epoch 11: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7820 - loss: 0.5107 - val_accuracy: 0.8335 - val_loss: 0.4418
Epoch 12/200
403/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7746 - loss: 0.5210
Epoch 12: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7748 - loss: 0.5207 - val_accuracy: 0.8275 - val_loss: 0.4428
Epoch 13/200
402/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7856 - loss: 0.5018
Epoch 13: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.7854 - loss: 0.5021 - val_accuracy: 0.8347 - val_loss: 0.4385
Epoch 14/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7857 - loss: 0.4991
Epoch 14: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7856 - loss: 0.4993 - val_accuracy: 0.8323 - val_loss: 0.4388
Epoch 15/200
393/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7749 - loss: 0.5133
Epoch 15: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7752 - loss: 0.5128 - val_accuracy: 0.8347 - val_loss: 0.4359
Epoch 16/200
393/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7929 - loss: 0.5074
Epoch 16: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7925 - loss: 0.5074 - val_accuracy: 0.8299 - val_loss: 0.4382
Epoch 17/200
397/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7863 - loss: 0.5114
Epoch 17: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7864 - loss: 0.5114 - val_accuracy: 0.8395 - val_loss: 0.4354
Epoch 18/200
398/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7902 - loss: 0.4986
Epoch 18: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7895 - loss: 0.4993 - val_accuracy: 0.8395 - val_loss: 0.4378
Epoch 19/200
403/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7822 - loss: 0.4974
Epoch 19: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7816 - loss: 0.4984 - val_accuracy: 0.8359 - val_loss: 0.4432
Epoch 20/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7827 - loss: 0.5155
Epoch 20: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7826 - loss: 0.5154 - val_accuracy: 0.8371 - val_loss: 0.4389
Epoch 20: early stopping
Restoring model weights from the end of the best epoch: 1.
Epoch 1/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.5837 - loss: 0.7919
Epoch 1: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 5s 3ms/step - accuracy: 0.5852 - loss: 0.7897 - val_accuracy: 0.7605 - val_loss: 0.5385
Epoch 2/200
395/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7501 - loss: 0.5384
Epoch 2: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7505 - loss: 0.5379 - val_accuracy: 0.8311 - val_loss: 0.4539
Epoch 3/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7594 - loss: 0.5311
Epoch 3: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7595 - loss: 0.5311 - val_accuracy: 0.8335 - val_loss: 0.4463
Epoch 4/200
404/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7594 - loss: 0.5313
Epoch 4: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7595 - loss: 0.5311 - val_accuracy: 0.8287 - val_loss: 0.4413
Epoch 5/200
403/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7664 - loss: 0.5060
Epoch 5: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7663 - loss: 0.5063 - val_accuracy: 0.8335 - val_loss: 0.4277
Epoch 6/200
395/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7612 - loss: 0.5068
Epoch 6: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7614 - loss: 0.5066 - val_accuracy: 0.8335 - val_loss: 0.4245
Epoch 7/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7574 - loss: 0.5221
Epoch 7: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7576 - loss: 0.5217 - val_accuracy: 0.8323 - val_loss: 0.4186
Epoch 8/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7739 - loss: 0.5074
Epoch 8: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7739 - loss: 0.5074 - val_accuracy: 0.8347 - val_loss: 0.4147
Epoch 9/200
404/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7659 - loss: 0.4943
Epoch 9: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7659 - loss: 0.4947 - val_accuracy: 0.8395 - val_loss: 0.4129
Epoch 10/200
409/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7770 - loss: 0.4895
Epoch 10: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7769 - loss: 0.4896 - val_accuracy: 0.8407 - val_loss: 0.4043
Epoch 11/200
401/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7767 - loss: 0.4887
Epoch 11: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7766 - loss: 0.4888 - val_accuracy: 0.8371 - val_loss: 0.4017
Epoch 12/200
396/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7797 - loss: 0.4804
Epoch 12: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7794 - loss: 0.4809 - val_accuracy: 0.8359 - val_loss: 0.4016
Epoch 13/200
403/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7822 - loss: 0.4812
Epoch 13: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7819 - loss: 0.4815 - val_accuracy: 0.8395 - val_loss: 0.4120
Epoch 14/200
413/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7794 - loss: 0.4786
Epoch 14: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7794 - loss: 0.4786 - val_accuracy: 0.8335 - val_loss: 0.3773
Epoch 15/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7658 - loss: 0.4826
Epoch 15: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7659 - loss: 0.4826 - val_accuracy: 0.8359 - val_loss: 0.3804
Epoch 16/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7848 - loss: 0.4621
Epoch 16: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7847 - loss: 0.4623 - val_accuracy: 0.8383 - val_loss: 0.3829
Epoch 17/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7792 - loss: 0.4824
Epoch 17: val_loss did not improve from 0.37717
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7792 - loss: 0.4823 - val_accuracy: 0.8383 - val_loss: 0.3848
Epoch 18/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7671 - loss: 0.4819
Epoch 18: val_loss improved from 0.37717 to 0.37120, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 6ms/step - accuracy: 0.7671 - loss: 0.4819 - val_accuracy: 0.8407 - val_loss: 0.3712
Epoch 19/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7870 - loss: 0.4579
Epoch 19: val_loss improved from 0.37120 to 0.37020, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7869 - loss: 0.4581 - val_accuracy: 0.8407 - val_loss: 0.3702
Epoch 20/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7676 - loss: 0.5005
Epoch 20: val_loss did not improve from 0.37020
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7678 - loss: 0.5001 - val_accuracy: 0.8395 - val_loss: 0.3781
Epoch 21/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7813 - loss: 0.4638
Epoch 21: val_loss did not improve from 0.37020
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7813 - loss: 0.4636 - val_accuracy: 0.8347 - val_loss: 0.3703
Epoch 22/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7780 - loss: 0.4694
Epoch 22: val_loss did not improve from 0.37020
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7783 - loss: 0.4692 - val_accuracy: 0.8419 - val_loss: 0.3720
Epoch 23/200
413/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7860 - loss: 0.4595
Epoch 23: val_loss improved from 0.37020 to 0.36656, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7861 - loss: 0.4595 - val_accuracy: 0.8395 - val_loss: 0.3666
Epoch 24/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7937 - loss: 0.4593
Epoch 24: val_loss did not improve from 0.36656
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7936 - loss: 0.4595 - val_accuracy: 0.8383 - val_loss: 0.3696
Epoch 25/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7711 - loss: 0.4871
Epoch 25: val_loss did not improve from 0.36656
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7711 - loss: 0.4870 - val_accuracy: 0.8419 - val_loss: 0.3702
Epoch 26/200
403/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7793 - loss: 0.4716
Epoch 26: val_loss improved from 0.36656 to 0.36636, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7796 - loss: 0.4712 - val_accuracy: 0.8407 - val_loss: 0.3664
Epoch 27/200
401/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7849 - loss: 0.4766
Epoch 27: val_loss did not improve from 0.36636
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7848 - loss: 0.4763 - val_accuracy: 0.8395 - val_loss: 0.3718
Epoch 28/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7835 - loss: 0.4674
Epoch 28: val_loss did not improve from 0.36636
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7835 - loss: 0.4674 - val_accuracy: 0.8419 - val_loss: 0.3779
Epoch 29/200
409/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7855 - loss: 0.4515
Epoch 29: val_loss did not improve from 0.36636
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7854 - loss: 0.4518 - val_accuracy: 0.8407 - val_loss: 0.3760
Epoch 30/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7691 - loss: 0.4889
Epoch 30: val_loss improved from 0.36636 to 0.36626, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7696 - loss: 0.4882 - val_accuracy: 0.8407 - val_loss: 0.3663
Epoch 31/200
397/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7937 - loss: 0.4527
Epoch 31: val_loss did not improve from 0.36626
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7935 - loss: 0.4529 - val_accuracy: 0.8407 - val_loss: 0.3711
Epoch 32/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7933 - loss: 0.4478
Epoch 32: val_loss did not improve from 0.36626
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7933 - loss: 0.4479 - val_accuracy: 0.8431 - val_loss: 0.3725
Epoch 33/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7866 - loss: 0.4683
Epoch 33: val_loss did not improve from 0.36626
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7866 - loss: 0.4683 - val_accuracy: 0.8395 - val_loss: 0.3714
Epoch 34/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7807 - loss: 0.4608
Epoch 34: val_loss did not improve from 0.36626
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7809 - loss: 0.4608 - val_accuracy: 0.8407 - val_loss: 0.3680
Epoch 35/200
403/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7880 - loss: 0.4628
Epoch 35: val_loss did not improve from 0.36626
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7878 - loss: 0.4630 - val_accuracy: 0.8407 - val_loss: 0.3688
Epoch 36/200
413/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7796 - loss: 0.4714
Epoch 36: val_loss improved from 0.36626 to 0.36619, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7798 - loss: 0.4713 - val_accuracy: 0.8419 - val_loss: 0.3662
Epoch 37/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7927 - loss: 0.4584
Epoch 37: val_loss did not improve from 0.36619
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.7927 - loss: 0.4585 - val_accuracy: 0.8443 - val_loss: 0.3703
Epoch 38/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7975 - loss: 0.4530
Epoch 38: val_loss did not improve from 0.36619
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7974 - loss: 0.4531 - val_accuracy: 0.8383 - val_loss: 0.3666
Epoch 39/200
402/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7752 - loss: 0.4818
Epoch 39: val_loss did not improve from 0.36619
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7755 - loss: 0.4814 - val_accuracy: 0.8419 - val_loss: 0.3726
Epoch 40/200
413/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7850 - loss: 0.4568
Epoch 40: val_loss did not improve from 0.36619
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7850 - loss: 0.4568 - val_accuracy: 0.8371 - val_loss: 0.3725
Epoch 41/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7921 - loss: 0.4584
Epoch 41: val_loss improved from 0.36619 to 0.36522, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7921 - loss: 0.4584 - val_accuracy: 0.8431 - val_loss: 0.3652
Epoch 42/200
402/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7790 - loss: 0.4552
Epoch 42: val_loss improved from 0.36522 to 0.36287, saving model to best_model.keras
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7792 - loss: 0.4552 - val_accuracy: 0.8407 - val_loss: 0.3629
Epoch 43/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7664 - loss: 0.4782
Epoch 43: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7665 - loss: 0.4781 - val_accuracy: 0.8443 - val_loss: 0.3637
Epoch 44/200
399/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7740 - loss: 0.4729
Epoch 44: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7746 - loss: 0.4723 - val_accuracy: 0.8455 - val_loss: 0.3644
Epoch 45/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7960 - loss: 0.4596
Epoch 45: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7956 - loss: 0.4598 - val_accuracy: 0.8431 - val_loss: 0.3771
Epoch 46/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7811 - loss: 0.4724
Epoch 46: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7811 - loss: 0.4724 - val_accuracy: 0.8443 - val_loss: 0.3705
Epoch 47/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7828 - loss: 0.4691
Epoch 47: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7828 - loss: 0.4690 - val_accuracy: 0.8407 - val_loss: 0.3711
Epoch 48/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7864 - loss: 0.4594
Epoch 48: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7865 - loss: 0.4594 - val_accuracy: 0.8443 - val_loss: 0.3695
Epoch 49/200
400/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7767 - loss: 0.4748
Epoch 49: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7768 - loss: 0.4747 - val_accuracy: 0.8455 - val_loss: 0.3693
Epoch 50/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7804 - loss: 0.4623
Epoch 50: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7803 - loss: 0.4623 - val_accuracy: 0.8431 - val_loss: 0.3696
Epoch 51/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7859 - loss: 0.4662
Epoch 51: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7859 - loss: 0.4661 - val_accuracy: 0.8419 - val_loss: 0.3756
Epoch 52/200
400/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7763 - loss: 0.4696
Epoch 52: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7762 - loss: 0.4693 - val_accuracy: 0.8395 - val_loss: 0.3689
Epoch 53/200
408/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7793 - loss: 0.4876
Epoch 53: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7794 - loss: 0.4872 - val_accuracy: 0.8419 - val_loss: 0.3646
Epoch 54/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7904 - loss: 0.4606
Epoch 54: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7902 - loss: 0.4607 - val_accuracy: 0.8515 - val_loss: 0.3742
Epoch 55/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7786 - loss: 0.4581
Epoch 55: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7785 - loss: 0.4584 - val_accuracy: 0.8467 - val_loss: 0.3691
Epoch 56/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7936 - loss: 0.4538
Epoch 56: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7935 - loss: 0.4539 - val_accuracy: 0.8455 - val_loss: 0.3811
Epoch 57/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7694 - loss: 0.4715
Epoch 57: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7696 - loss: 0.4714 - val_accuracy: 0.8407 - val_loss: 0.3806
Epoch 58/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7975 - loss: 0.4414
Epoch 58: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7974 - loss: 0.4416 - val_accuracy: 0.8491 - val_loss: 0.3658
Epoch 59/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7943 - loss: 0.4544
Epoch 59: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7943 - loss: 0.4545 - val_accuracy: 0.8419 - val_loss: 0.3717
Epoch 60/200
413/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7979 - loss: 0.4574
Epoch 60: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7978 - loss: 0.4574 - val_accuracy: 0.8431 - val_loss: 0.3753
Epoch 61/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7836 - loss: 0.4619
Epoch 61: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7835 - loss: 0.4622 - val_accuracy: 0.8455 - val_loss: 0.3701
Epoch 62/200
397/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7864 - loss: 0.4468
Epoch 62: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7864 - loss: 0.4475 - val_accuracy: 0.8443 - val_loss: 0.3651
Epoch 62: early stopping
Restoring model weights from the end of the best epoch: 42.
Epoch 1/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7137 - loss: 0.6191
Epoch 1: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 5s 5ms/step - accuracy: 0.7139 - loss: 0.6189 - val_accuracy: 0.8287 - val_loss: 0.5169
Epoch 2/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7713 - loss: 0.5222
Epoch 2: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7713 - loss: 0.5220 - val_accuracy: 0.8228 - val_loss: 0.4340
Epoch 3/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7716 - loss: 0.5007
Epoch 3: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7716 - loss: 0.5006 - val_accuracy: 0.8371 - val_loss: 0.4046
Epoch 4/200
408/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7677 - loss: 0.4927
Epoch 4: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.7677 - loss: 0.4928 - val_accuracy: 0.8371 - val_loss: 0.4082
Epoch 5/200
402/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7740 - loss: 0.4876
Epoch 5: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.7740 - loss: 0.4876 - val_accuracy: 0.8359 - val_loss: 0.4019
Epoch 6/200
409/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7537 - loss: 0.5093
Epoch 6: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7539 - loss: 0.5090 - val_accuracy: 0.8383 - val_loss: 0.4109
Epoch 7/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7766 - loss: 0.4919
Epoch 7: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7766 - loss: 0.4918 - val_accuracy: 0.8383 - val_loss: 0.3875
Epoch 8/200
413/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7452 - loss: 0.5022
Epoch 8: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7454 - loss: 0.5020 - val_accuracy: 0.8371 - val_loss: 0.3873
Epoch 9/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7818 - loss: 0.4644
Epoch 9: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7817 - loss: 0.4646 - val_accuracy: 0.8359 - val_loss: 0.3769
Epoch 10/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7838 - loss: 0.4819
Epoch 10: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7837 - loss: 0.4818 - val_accuracy: 0.8419 - val_loss: 0.3712
Epoch 11/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7871 - loss: 0.4594
Epoch 11: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7871 - loss: 0.4594 - val_accuracy: 0.8407 - val_loss: 0.3791
Epoch 12/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7790 - loss: 0.4593
Epoch 12: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7791 - loss: 0.4592 - val_accuracy: 0.8431 - val_loss: 0.3636
Epoch 13/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7796 - loss: 0.4706
Epoch 13: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7796 - loss: 0.4707 - val_accuracy: 0.8395 - val_loss: 0.3857
Epoch 14/200
401/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7783 - loss: 0.4758
Epoch 14: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7782 - loss: 0.4759 - val_accuracy: 0.8407 - val_loss: 0.3996
Epoch 15/200
409/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7835 - loss: 0.4632
Epoch 15: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7835 - loss: 0.4633 - val_accuracy: 0.8431 - val_loss: 0.3781
Epoch 16/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7801 - loss: 0.4761
Epoch 16: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7802 - loss: 0.4760 - val_accuracy: 0.8419 - val_loss: 0.3723
Epoch 17/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7727 - loss: 0.4775
Epoch 17: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7728 - loss: 0.4774 - val_accuracy: 0.8455 - val_loss: 0.3682
Epoch 18/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7926 - loss: 0.4494
Epoch 18: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7924 - loss: 0.4497 - val_accuracy: 0.8359 - val_loss: 0.3881
Epoch 19/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7758 - loss: 0.4693
Epoch 19: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7758 - loss: 0.4693 - val_accuracy: 0.8491 - val_loss: 0.3793
Epoch 20/200
403/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7742 - loss: 0.4785
Epoch 20: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7743 - loss: 0.4782 - val_accuracy: 0.8395 - val_loss: 0.3676
Epoch 20: early stopping
Restoring model weights from the end of the best epoch: 1.
Epoch 1/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.6286 - loss: 0.6896
Epoch 1: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 6s 4ms/step - accuracy: 0.6287 - loss: 0.6894 - val_accuracy: 0.8036 - val_loss: 0.5415
Epoch 2/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7447 - loss: 0.5282
Epoch 2: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.7449 - loss: 0.5282 - val_accuracy: 0.8275 - val_loss: 0.4465
Epoch 3/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7649 - loss: 0.5203
Epoch 3: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7649 - loss: 0.5203 - val_accuracy: 0.8359 - val_loss: 0.4343
Epoch 4/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7677 - loss: 0.5004
Epoch 4: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7677 - loss: 0.5004 - val_accuracy: 0.8371 - val_loss: 0.4335
Epoch 5/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7728 - loss: 0.4961
Epoch 5: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7728 - loss: 0.4961 - val_accuracy: 0.8383 - val_loss: 0.4048
Epoch 6/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7791 - loss: 0.4784
Epoch 6: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7790 - loss: 0.4787 - val_accuracy: 0.8371 - val_loss: 0.4048
Epoch 7/200
408/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7617 - loss: 0.4931
Epoch 7: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7618 - loss: 0.4930 - val_accuracy: 0.8443 - val_loss: 0.4057
Epoch 8/200
408/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7689 - loss: 0.5039
Epoch 8: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7690 - loss: 0.5035 - val_accuracy: 0.8407 - val_loss: 0.4156
Epoch 9/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7796 - loss: 0.4824
Epoch 9: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7796 - loss: 0.4824 - val_accuracy: 0.8347 - val_loss: 0.3939
Epoch 10/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7772 - loss: 0.4798
Epoch 10: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 6ms/step - accuracy: 0.7771 - loss: 0.4799 - val_accuracy: 0.8395 - val_loss: 0.3990
Epoch 11/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7662 - loss: 0.4830
Epoch 11: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7663 - loss: 0.4830 - val_accuracy: 0.8395 - val_loss: 0.3930
Epoch 12/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7776 - loss: 0.4810
Epoch 12: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7773 - loss: 0.4811 - val_accuracy: 0.8347 - val_loss: 0.3915
Epoch 13/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7858 - loss: 0.4709
Epoch 13: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.7858 - loss: 0.4710 - val_accuracy: 0.8431 - val_loss: 0.4034
Epoch 14/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7717 - loss: 0.4757
Epoch 14: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.7717 - loss: 0.4757 - val_accuracy: 0.8431 - val_loss: 0.3806
Epoch 15/200
409/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7714 - loss: 0.4808
Epoch 15: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7714 - loss: 0.4806 - val_accuracy: 0.8323 - val_loss: 0.3885
Epoch 16/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7746 - loss: 0.4836
Epoch 16: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7747 - loss: 0.4835 - val_accuracy: 0.8263 - val_loss: 0.3949
Epoch 17/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7898 - loss: 0.4668
Epoch 17: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - accuracy: 0.7898 - loss: 0.4667 - val_accuracy: 0.8383 - val_loss: 0.3830
Epoch 18/200
409/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7884 - loss: 0.4614
Epoch 18: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7882 - loss: 0.4617 - val_accuracy: 0.8419 - val_loss: 0.3895
Epoch 19/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7540 - loss: 0.4848
Epoch 19: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7546 - loss: 0.4843 - val_accuracy: 0.8419 - val_loss: 0.3779
Epoch 20/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7908 - loss: 0.4492
Epoch 20: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.7905 - loss: 0.4498 - val_accuracy: 0.8455 - val_loss: 0.3939
Epoch 20: early stopping
Restoring model weights from the end of the best epoch: 1.
Epoch 1/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.5644 - loss: 0.7587
Epoch 1: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 8s 7ms/step - accuracy: 0.5654 - loss: 0.7571 - val_accuracy: 0.7377 - val_loss: 0.5858
Epoch 2/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7004 - loss: 0.5519
Epoch 2: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7007 - loss: 0.5520 - val_accuracy: 0.8263 - val_loss: 0.4703
Epoch 3/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7408 - loss: 0.5390
Epoch 3: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7408 - loss: 0.5389 - val_accuracy: 0.8311 - val_loss: 0.4573
Epoch 4/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7696 - loss: 0.5177
Epoch 4: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7695 - loss: 0.5177 - val_accuracy: 0.8359 - val_loss: 0.4285
Epoch 5/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7851 - loss: 0.4931
Epoch 5: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.7850 - loss: 0.4932 - val_accuracy: 0.8359 - val_loss: 0.4181
Epoch 6/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7847 - loss: 0.4790
Epoch 6: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 6ms/step - accuracy: 0.7847 - loss: 0.4790 - val_accuracy: 0.8371 - val_loss: 0.4179
Epoch 7/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.7687 - loss: 0.4981
Epoch 7: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 6ms/step - accuracy: 0.7689 - loss: 0.4980 - val_accuracy: 0.8359 - val_loss: 0.4204
Epoch 8/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7730 - loss: 0.4897
Epoch 8: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.7730 - loss: 0.4897 - val_accuracy: 0.8407 - val_loss: 0.4102
Epoch 9/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7751 - loss: 0.4890
Epoch 9: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7750 - loss: 0.4890 - val_accuracy: 0.8371 - val_loss: 0.4054
Epoch 10/200
413/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7807 - loss: 0.4751
Epoch 10: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7806 - loss: 0.4752 - val_accuracy: 0.8395 - val_loss: 0.3878
Epoch 11/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7697 - loss: 0.4904
Epoch 11: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 6ms/step - accuracy: 0.7697 - loss: 0.4904 - val_accuracy: 0.8383 - val_loss: 0.3961
Epoch 12/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7724 - loss: 0.4870
Epoch 12: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 6ms/step - accuracy: 0.7725 - loss: 0.4868 - val_accuracy: 0.8395 - val_loss: 0.3830
Epoch 13/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7628 - loss: 0.4923
Epoch 13: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7630 - loss: 0.4920 - val_accuracy: 0.8287 - val_loss: 0.4063
Epoch 14/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7788 - loss: 0.4716
Epoch 14: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7788 - loss: 0.4716 - val_accuracy: 0.8371 - val_loss: 0.3833
Epoch 15/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7757 - loss: 0.4716
Epoch 15: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7756 - loss: 0.4717 - val_accuracy: 0.8395 - val_loss: 0.4008
Epoch 16/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7654 - loss: 0.4788
Epoch 16: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7655 - loss: 0.4787 - val_accuracy: 0.8395 - val_loss: 0.3876
Epoch 17/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7784 - loss: 0.4723
Epoch 17: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.7784 - loss: 0.4723 - val_accuracy: 0.8419 - val_loss: 0.3793
Epoch 18/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.7841 - loss: 0.4838
Epoch 18: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 6ms/step - accuracy: 0.7842 - loss: 0.4837 - val_accuracy: 0.8347 - val_loss: 0.3865
Epoch 19/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7810 - loss: 0.4791
Epoch 19: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7810 - loss: 0.4791 - val_accuracy: 0.8371 - val_loss: 0.3825
Epoch 20/200
413/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7793 - loss: 0.4618
Epoch 20: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7792 - loss: 0.4620 - val_accuracy: 0.8323 - val_loss: 0.3919
Epoch 20: early stopping
Restoring model weights from the end of the best epoch: 1.

Results summary¶

In [79]:
# Convert results to DataFrame for better visualization
results_df = pd.DataFrame(results, columns=['Model Architecture', 'Acc. on Training Set', 'Acc. on Validation Set', 'Training Loss', 'Validation Loss', 'Total Parameters'])

# Print results summary
print("\nResults Summary:\n")
print(results_df.to_string(index=False))

# Load the best model based on validation loss
best_model = models.load_model(checkpoint_path) # Now models is defined and load_model can be accessed
print("\nBest model loaded from:", checkpoint_path)
Results Summary:

Model Architecture  Acc. on Training Set  Acc. on Validation Set  Training Loss  Validation Loss  Total Parameters
            (2, 1)              0.840719                0.850299       0.382159         0.377166                35
            (4, 1)              0.662275                0.668263       0.652111         0.648960                63
            (8, 1)              0.843114                0.840719       0.365871         0.362867               119
        (16, 8, 1)              0.818263                0.828743       0.519413         0.516859               391
    (32, 16, 8, 1)              0.816168                0.803593       0.542900         0.541522              1191
(64, 32, 16, 8, 1)              0.707784                0.737725       0.590900         0.585836              3815

Best model loaded from: best_model.keras

Plot Learning Curves:¶

Learning curves for each architecture are plotted to visualize how both training and validation accuracies evolve over epochs. This helps identify how well each model learns and if any of them are overfitting.

Load the Best Model: The model with the best performance (lowest validation loss) is loaded for final evaluation.

In [81]:
def plot_all_learning_curves(histories, model_names):
    plt.figure(figsize=(10, 6))

    # Loop through each model's history and plot accuracy
    for history, arch in zip(histories, model_names):
        plt.plot(history.history['accuracy'], label='Train Accuracy ' + str(arch))
        plt.plot(history.history['val_accuracy'], label='Validation Accuracy ' + str(arch))

    # Customize the plot
    plt.title('Learning Curves for Various Architectures')
    plt.xlabel('Epochs')
    plt.ylabel('Accuracy')
    plt.legend()
    plt.grid()
    plt.show()

# Assuming 'histories' contains the training history for each model
# and 'model_names' is the list of corresponding model names
# Instead of using undefined variables, store the history objects during model training
histories = []  # Initialize an empty list to store histories
model_names = ['Model1', 'Model2', 'Model3']  # Define model names

# Train the first three models and store their histories
for i, arch in enumerate(model_architectures[:3]):  # Train only the first three models
    model = create_model(arch)
    history = model.fit(XTRAIN, YTRAIN, validation_data=(XVALIDATION, YVALIDATION),
                        epochs=200, batch_size=8, verbose=1,
                        callbacks=[checkpoint, early_stopping])
    histories.append(history)  # Append the history object to the list

# Now you can plot the learning curves using the stored histories
plot_all_learning_curves(histories, model_names)
Epoch 1/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7291 - loss: 0.6026
Epoch 1: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.7291 - loss: 0.6023 - val_accuracy: 0.8156 - val_loss: 0.5300
Epoch 2/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7350 - loss: 0.5505
Epoch 2: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7350 - loss: 0.5505 - val_accuracy: 0.8228 - val_loss: 0.4725
Epoch 3/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7441 - loss: 0.5405
Epoch 3: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7443 - loss: 0.5404 - val_accuracy: 0.8251 - val_loss: 0.4523
Epoch 4/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7384 - loss: 0.5287
Epoch 4: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7384 - loss: 0.5287 - val_accuracy: 0.8228 - val_loss: 0.4563
Epoch 5/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7407 - loss: 0.5220
Epoch 5: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7407 - loss: 0.5220 - val_accuracy: 0.8299 - val_loss: 0.4364
Epoch 6/200
401/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7398 - loss: 0.5250
Epoch 6: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7401 - loss: 0.5248 - val_accuracy: 0.8323 - val_loss: 0.4394
Epoch 7/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7480 - loss: 0.5119
Epoch 7: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7479 - loss: 0.5119 - val_accuracy: 0.8323 - val_loss: 0.4333
Epoch 8/200
397/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7406 - loss: 0.5253
Epoch 8: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7415 - loss: 0.5244 - val_accuracy: 0.8335 - val_loss: 0.4229
Epoch 9/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7464 - loss: 0.5205
Epoch 9: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7462 - loss: 0.5205 - val_accuracy: 0.8335 - val_loss: 0.4288
Epoch 10/200
402/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7620 - loss: 0.5024
Epoch 10: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7620 - loss: 0.5027 - val_accuracy: 0.8371 - val_loss: 0.4215
Epoch 11/200
409/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7637 - loss: 0.5075
Epoch 11: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7637 - loss: 0.5074 - val_accuracy: 0.8251 - val_loss: 0.4327
Epoch 12/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7656 - loss: 0.4970
Epoch 12: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7655 - loss: 0.4971 - val_accuracy: 0.8383 - val_loss: 0.4166
Epoch 13/200
401/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7496 - loss: 0.4969
Epoch 13: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7494 - loss: 0.4974 - val_accuracy: 0.8359 - val_loss: 0.4090
Epoch 14/200
400/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7621 - loss: 0.5032
Epoch 14: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7620 - loss: 0.5031 - val_accuracy: 0.8395 - val_loss: 0.4108
Epoch 15/200
395/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7586 - loss: 0.5022
Epoch 15: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7584 - loss: 0.5018 - val_accuracy: 0.8347 - val_loss: 0.4095
Epoch 16/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7575 - loss: 0.4925
Epoch 16: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7575 - loss: 0.4925 - val_accuracy: 0.8383 - val_loss: 0.3975
Epoch 17/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7732 - loss: 0.4912
Epoch 17: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7731 - loss: 0.4911 - val_accuracy: 0.8168 - val_loss: 0.4305
Epoch 18/200
408/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7647 - loss: 0.4903
Epoch 18: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.7648 - loss: 0.4903 - val_accuracy: 0.8383 - val_loss: 0.4022
Epoch 19/200
399/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7519 - loss: 0.5237
Epoch 19: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7520 - loss: 0.5228 - val_accuracy: 0.8419 - val_loss: 0.4153
Epoch 20/200
398/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7495 - loss: 0.5028
Epoch 20: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7496 - loss: 0.5028 - val_accuracy: 0.8431 - val_loss: 0.4031
Epoch 20: early stopping
Restoring model weights from the end of the best epoch: 1.
Epoch 1/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7124 - loss: 0.6353
Epoch 1: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 4s 3ms/step - accuracy: 0.7129 - loss: 0.6343 - val_accuracy: 0.7868 - val_loss: 0.5399
Epoch 2/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7560 - loss: 0.5511
Epoch 2: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7560 - loss: 0.5511 - val_accuracy: 0.8299 - val_loss: 0.4678
Epoch 3/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7587 - loss: 0.5366
Epoch 3: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7589 - loss: 0.5364 - val_accuracy: 0.8228 - val_loss: 0.4544
Epoch 4/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7590 - loss: 0.5376
Epoch 4: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - accuracy: 0.7590 - loss: 0.5376 - val_accuracy: 0.8287 - val_loss: 0.4553
Epoch 5/200
409/418 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7692 - loss: 0.5174
Epoch 5: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 5ms/step - accuracy: 0.7690 - loss: 0.5176 - val_accuracy: 0.8311 - val_loss: 0.4642
Epoch 6/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7715 - loss: 0.5096
Epoch 6: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7715 - loss: 0.5097 - val_accuracy: 0.8275 - val_loss: 0.4479
Epoch 7/200
394/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7838 - loss: 0.4984
Epoch 7: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7831 - loss: 0.4992 - val_accuracy: 0.8323 - val_loss: 0.4434
Epoch 8/200
398/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7799 - loss: 0.5045
Epoch 8: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7793 - loss: 0.5049 - val_accuracy: 0.8335 - val_loss: 0.4354
Epoch 9/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7879 - loss: 0.4988
Epoch 9: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7875 - loss: 0.4993 - val_accuracy: 0.8335 - val_loss: 0.4355
Epoch 10/200
393/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7765 - loss: 0.5152
Epoch 10: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7760 - loss: 0.5153 - val_accuracy: 0.8383 - val_loss: 0.4307
Epoch 11/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7738 - loss: 0.5072
Epoch 11: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7737 - loss: 0.5073 - val_accuracy: 0.8371 - val_loss: 0.4236
Epoch 12/200
404/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7674 - loss: 0.5116
Epoch 12: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7673 - loss: 0.5118 - val_accuracy: 0.8371 - val_loss: 0.4257
Epoch 13/200
401/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7954 - loss: 0.4757
Epoch 13: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7948 - loss: 0.4764 - val_accuracy: 0.8407 - val_loss: 0.4135
Epoch 14/200
397/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7737 - loss: 0.4871
Epoch 14: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7732 - loss: 0.4877 - val_accuracy: 0.8240 - val_loss: 0.4152
Epoch 15/200
400/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7771 - loss: 0.4748
Epoch 15: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7768 - loss: 0.4752 - val_accuracy: 0.8383 - val_loss: 0.3856
Epoch 16/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7705 - loss: 0.4878
Epoch 16: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7704 - loss: 0.4879 - val_accuracy: 0.8395 - val_loss: 0.3901
Epoch 17/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7786 - loss: 0.4804
Epoch 17: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7785 - loss: 0.4805 - val_accuracy: 0.8407 - val_loss: 0.3865
Epoch 18/200
411/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7777 - loss: 0.4833
Epoch 18: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7777 - loss: 0.4832 - val_accuracy: 0.8395 - val_loss: 0.3826
Epoch 19/200
415/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7778 - loss: 0.4662
Epoch 19: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7777 - loss: 0.4664 - val_accuracy: 0.8383 - val_loss: 0.3845
Epoch 20/200
397/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7821 - loss: 0.4700
Epoch 20: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7823 - loss: 0.4702 - val_accuracy: 0.8335 - val_loss: 0.3848
Epoch 20: early stopping
Restoring model weights from the end of the best epoch: 1.
Epoch 1/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.5408 - loss: 0.7026
Epoch 1: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.5410 - loss: 0.7025 - val_accuracy: 0.7737 - val_loss: 0.5381
Epoch 2/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7173 - loss: 0.5755
Epoch 2: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7173 - loss: 0.5755 - val_accuracy: 0.7988 - val_loss: 0.4933
Epoch 3/200
406/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7433 - loss: 0.5389
Epoch 3: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7435 - loss: 0.5385 - val_accuracy: 0.8240 - val_loss: 0.4575
Epoch 4/200
403/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7557 - loss: 0.5243
Epoch 4: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7558 - loss: 0.5238 - val_accuracy: 0.8275 - val_loss: 0.4394
Epoch 5/200
418/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7631 - loss: 0.4939
Epoch 5: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7631 - loss: 0.4939 - val_accuracy: 0.8251 - val_loss: 0.4210
Epoch 6/200
412/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7632 - loss: 0.5103
Epoch 6: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7633 - loss: 0.5101 - val_accuracy: 0.8419 - val_loss: 0.4126
Epoch 7/200
416/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7867 - loss: 0.4763
Epoch 7: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7867 - loss: 0.4764 - val_accuracy: 0.8395 - val_loss: 0.4099
Epoch 8/200
414/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7787 - loss: 0.4743
Epoch 8: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7786 - loss: 0.4745 - val_accuracy: 0.8383 - val_loss: 0.4047
Epoch 9/200
401/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7656 - loss: 0.4935
Epoch 9: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7655 - loss: 0.4937 - val_accuracy: 0.8395 - val_loss: 0.4053
Epoch 10/200
397/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7737 - loss: 0.4910
Epoch 10: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7737 - loss: 0.4908 - val_accuracy: 0.8335 - val_loss: 0.4250
Epoch 11/200
417/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7775 - loss: 0.4822
Epoch 11: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7775 - loss: 0.4822 - val_accuracy: 0.8371 - val_loss: 0.3878
Epoch 12/200
407/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7820 - loss: 0.4680
Epoch 12: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7822 - loss: 0.4679 - val_accuracy: 0.8359 - val_loss: 0.3948
Epoch 13/200
402/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7989 - loss: 0.4502
Epoch 13: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7981 - loss: 0.4511 - val_accuracy: 0.8371 - val_loss: 0.3819
Epoch 14/200
394/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7751 - loss: 0.4848
Epoch 14: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7758 - loss: 0.4839 - val_accuracy: 0.8407 - val_loss: 0.3914
Epoch 15/200
410/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7801 - loss: 0.4782
Epoch 15: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7802 - loss: 0.4781 - val_accuracy: 0.8455 - val_loss: 0.3858
Epoch 16/200
405/418 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.7821 - loss: 0.4653
Epoch 16: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7819 - loss: 0.4655 - val_accuracy: 0.8455 - val_loss: 0.3867
Epoch 17/200
399/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7922 - loss: 0.4551
Epoch 17: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.7923 - loss: 0.4547 - val_accuracy: 0.8455 - val_loss: 0.3735
Epoch 18/200
395/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7714 - loss: 0.4771
Epoch 18: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7718 - loss: 0.4768 - val_accuracy: 0.8419 - val_loss: 0.3758
Epoch 19/200
398/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7842 - loss: 0.4684
Epoch 19: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7841 - loss: 0.4683 - val_accuracy: 0.8455 - val_loss: 0.3871
Epoch 20/200
402/418 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.7756 - loss: 0.4706
Epoch 20: val_loss did not improve from 0.36287
418/418 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.7755 - loss: 0.4709 - val_accuracy: 0.8383 - val_loss: 0.3945
Epoch 20: early stopping
Restoring model weights from the end of the best epoch: 1.
No description has been provided for this image

Feature importance visualization using SHAP¶

ROC AUC Evaluation¶

In [82]:
from sklearn.metrics import roc_curve, auc

# ROC AUC Evaluation
y_pred_proba = best_model.predict(XVALIDATION).flatten()  # Get predicted probabilities
fpr, tpr, thresholds = roc_curve(YVALIDATION, y_pred_proba)
roc_auc = auc(fpr, tpr)

# Plot ROC Curve
plt.figure()
plt.plot(fpr, tpr, color='darkorange', lw=2, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()

# Print ROC AUC Score
print("ROC AUC Score:", roc_auc.round(2) * 100,"%")
27/27 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step
No description has been provided for this image
ROC AUC Score: 91.0 %

Predictions and Metrics:¶

Predictions are generated for the validation set using the best model. A custom classification report function calculates metrics such as accuracy, precision, recall, and F1 score based on the predictions, providing a detailed assessment of the model's predictive performance.

Make predictions on the validation set Evaluate the model using additional metrics

In [83]:
# Make predictions on the validation set
y_pred = (best_model.predict(XVALIDATION) > 0.5).astype(int)  # Use best_model for predictions

# Custom classification report function
def classification_report(y_true, y_pred):
    y_pred = y_pred.reshape(y_true.shape)  # Reshape y_pred to match y_true
    tp = np.sum((y_true == 1) & (y_pred == 1))  # True Positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # True Negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # False Positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # False Negatives

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1_score = 2 * (precision * recall) / (precision + recall) if (precision + recall) else 0.0

    return {
        'Accuracy': accuracy.round(2) * 100,
        'Precision': precision.round(2) * 100,
        'Recall': recall.round(2) * 100,
        'F1 Score': f1_score.round(2) *100
    }

# Evaluate the model using additional metrics
report = classification_report(YVALIDATION, y_pred)

# Print the classification report
print('\nClassification Report:')
for key, value in report.items():
    print(f"{key}: {value:.4f}")
27/27 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step 

Classification Report:
Accuracy: 84.0000
Precision: 84.0000
Recall: 95.0000
F1 Score: 89.0000

Print logistic regression results (example, needs to be defined previously) Replace with your logistic regression results variables (if available)

In [84]:
# Baseline Random Classifier Implementation
from sklearn.metrics import accuracy_score  # Import accuracy_score

class RandomBaselineClassifier:
    def __init__(self):
        self.class_probs = None

    def fit(self, y):
        # Calculate probabilities for each class based on the training target
        class_counts = y.value_counts(normalize=True)
        self.class_probs = class_counts

    def predict(self, X):
        # Randomly assign class based on learned probabilities
        return np.random.choice(self.class_probs.index, size=len(X), p=self.class_probs.values)

# Train the random baseline model
baseline_model = RandomBaselineClassifier()
baseline_model.fit(YTRAIN)

# Make predictions on the validation set
random_predictions = baseline_model.predict(XVALIDATION)
# Print results summary
print("\nResults Summary:")
print("_______________________________________________________________________________________")

# Baseline Logistic Regression Implementation
class LogisticRegression:
    def __init__(self):
        self.weights = None
        self.bias = None

    def fit(self, X, y, epochs=1000, learning_rate=0.01):
        self.weights = np.zeros(X.shape[1])
        self.bias = 0
        m = len(y)

        for _ in range(epochs):
            linear_model = np.dot(X, self.weights) + self.bias
            y_predicted = self.sigmoid(linear_model)
            dw = (1 / m) * np.dot(X.T, (y_predicted - y))
            db = (1 / m) * np.sum(y_predicted - y)
            self.weights -= learning_rate * dw
            self.bias -= learning_rate * db

    def predict(self, X):
        linear_model = np.dot(X, self.weights) + self.bias
        y_predicted = self.sigmoid(linear_model)
        return (y_predicted > 0.5).astype(int)

    @staticmethod
    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

# Train the logistic regression model
logistic_model = LogisticRegression()
logistic_model.fit(XTRAIN, YTRAIN)

# Calculate accuracies
logistic_acc_train = np.mean(logistic_model.predict(XTRAIN) == YTRAIN)  # Accuracy on training set
logistic_acc_val = np.mean(logistic_model.predict(XVALIDATION) == YVALIDATION)  # Accuracy on validation set
# Calculate accuracy for the random baseline model
# Convert results to DataFrame for better visualization
results_df = pd.DataFrame(results, columns=['Model Architecture', 'Acc. on Training Set', 'Acc. on Validation Set', 'Training Loss', 'Validation Loss', 'Total Parameters'])
print(results_df.to_string(index=False))

random_accuracy = accuracy_score(YVALIDATION, random_predictions) # Now accuracy_score is defined
print(f'Random Baseline Classifier Accuracy: {random_accuracy:.4f}')

# Print logistic regression results
print('Logistic Regression - Training Accuracy: {:.4f}  Validation Accuracy: {:.4f}\n'.format(logistic_acc_train, logistic_acc_val))

# Load the best model based on validation loss
best_model = models.load_model(checkpoint_path) # Now models is defined and load_model can be accessed
print("\nBest model loaded from:", checkpoint_path)

# Conclusion about model performance
best_architecture = model_architectures[np.argmax([result[2] for result in results])]
print(f'\nThe best performing model architecture is: {best_architecture} with validation accuracy: {max([result[2] for result in results]):.4f}')

# Extract weights and biases from the trained model
weights = [layer.get_weights()[0] for layer in model.layers if layer.get_weights()]
biases = [layer.get_weights()[1] for layer in model.layers if layer.get_weights()]

# Print the number of weights and biases
print(f"\nNumber of weights for each layer: {[w.shape for w in weights]}")
print(f"Number of biases for each layer: {[b.shape for b in biases]}")
Results Summary:
_______________________________________________________________________________________
Model Architecture  Acc. on Training Set  Acc. on Validation Set  Training Loss  Validation Loss  Total Parameters
            (2, 1)              0.840719                0.850299       0.382159         0.377166                35
            (4, 1)              0.662275                0.668263       0.652111         0.648960                63
            (8, 1)              0.843114                0.840719       0.365871         0.362867               119
        (16, 8, 1)              0.818263                0.828743       0.519413         0.516859               391
    (32, 16, 8, 1)              0.816168                0.803593       0.542900         0.541522              1191
(64, 32, 16, 8, 1)              0.707784                0.737725       0.590900         0.585836              3815
Random Baseline Classifier Accuracy: 0.5365
Logistic Regression - Training Accuracy: 0.7374  Validation Accuracy: 0.7377


Best model loaded from: best_model.keras

The best performing model architecture is: (2, 1) with validation accuracy: 0.8503

Number of weights for each layer: [(8, 8), (8,), (8, 1), (1,), (1, 1)]
Number of biases for each layer: [(8,), (8,), (1,), (1,), (1,)]

Custom prediction function using the trained model

In [85]:
def my_prediction_function(model, data):
    output = data
    for layer in model.layers:
        if layer.get_weights():  # Check if the layer has weights
            # Check if the layer has only weights and bias
            if len(layer.get_weights()) == 2:
                weights, bias = layer.get_weights()  # Extract weights and bias
                output = np.dot(output, weights) + bias  # Apply linear transformation
            # Handle layers with more than two weight elements (e.g., BatchNormalization)
            else:
                # Apply the layer's call method to the output
                output = layer.call(output)

        # Apply activation function (if any)
        if hasattr(layer, 'activation'):
            if layer.activation.__name__ == 'relu':
                output = np.maximum(0, output)
            elif layer.activation.__name__ == 'sigmoid':
                output = 1 / (1 + np.exp(-output))

    return output

# Example of passing the validation data to be predicted
custom_predictions = my_prediction_function(best_model, XVALIDATION)

# Print custom predictions
print("\nCustom Predictions using my_prediction_function:")
print(custom_predictions[35:45].round())
print(y_pred[35:45].round()) # Compare with predictions from the best_model

# Generate predictions on the validation data using your custom function
# CHANGED: Use XVALIDATION instead of XTRAIN to get predictions on the validation data
validation_predictions = my_prediction_function(best_model, XVALIDATION)

# Check if predictions are approximately the same
# CHANGED: Compare custom_predictions with validation_predictions (both on validation data)
assert np.allclose(validation_predictions, custom_predictions), "Predictions do not match!"
print("Predictions from both methods are the same.")
# Final Statements
print("\nProcess completed successfully!")
Custom Predictions using my_prediction_function:
[[0.00]
 [1.00]
 [1.00]
 [0.00]
 [0.00]
 [1.00]
 [0.00]
 [1.00]
 [1.00]
 [0.00]]
[[0]
 [1]
 [1]
 [0]
 [0]
 [1]
 [0]
 [1]
 [1]
 [0]]
Predictions from both methods are the same.

Process completed successfully!

Discussion on Architecture Size for Overfitting with Output as an Additional Input Feature¶

When you incorporate the output variable as an extra input feature in your model, you’re essentially providing the model with more data to work with. This additional information can help the model identify more specific patterns and details in the training data.

In simpler terms, if you use a basic model with fewer layers and neurons, it may have trouble grasping the complexities of the data, particularly with this new input. On the other hand, more complex architectures—those with more layers and neurons—offer greater capacity for learning but come with the risk of overfitting.

To achieve overfitting, you might need a model with several hidden layers and a greater number of neurons in each layer (for instance, a structure like 64-32-16-8). Such a deeper network is likely necessary to effectively capture the relationships and dependencies that arise when including the output variable as input. However, it’s crucial to monitor the validation loss to verify whether overfitting truly occurs.

In [88]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping

# Load dataset
BASE_PATH = 'https://raw.githubusercontent.com/Alakhras/Abalone-Age/main/Abalone.csv'
dataset = pd.read_csv(BASE_PATH)

# Clean the dataset and preprocess
dataset = dataset[dataset['Height'] != 0]  # Remove rows with 'Height' == 0
dataset["Rings"] = np.where(dataset["Rings"] < 9, 0, 1)  # Binary classification for Rings
dataset['Sex'] = dataset['Sex'].map({'M': 1, 'F': 0, 'I': 2}).astype(float)

# Shuffle the dataset
dataset = dataset.sample(frac=1, random_state=42).reset_index(drop=True)

# Separate features and target
X = dataset[['Sex', 'Length', 'Diameter', 'Height', 'Whole weight',
             'Shucked weight', 'Viscera weight', 'Shell weight']]
y = dataset['Rings']

# Add the target variable 'Rings' (output) as a feature (input)
X_with_output = X.copy()  # Copy existing features
X_with_output['Rings'] = y  # Add the target as an input feature

# No normalization to force feature dominance
XTRAIN = X_with_output.iloc[:int(0.8 * len(X_with_output))]
YTRAIN = y.iloc[:int(0.8 * len(y))]
XVALIDATION = X_with_output.iloc[int(0.8 * len(X_with_output)):]
YVALIDATION = y.iloc[int(0.8 * len(y)):]

# Convert to NumPy arrays for Keras
XTRAIN_np = XTRAIN.to_numpy()
YTRAIN_np = YTRAIN.to_numpy()
XVALIDATION_np = XVALIDATION.to_numpy()
YVALIDATION_np = YVALIDATION.to_numpy()

# Define a simple model with one neuron and sigmoid activation
model = Sequential()
model.add(Dense(1, input_dim=XTRAIN_np.shape[1], activation='sigmoid'))  # Single neuron
print(model.summary())

# Compile the model
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])  # Using 'sgd' for simpler optimization

# Add EarlyStopping callback
early_stop = EarlyStopping(monitor='val_loss', patience=10, verbose=1, mode='min', restore_best_weights=True)

# Train the model
history = model.fit(
    XTRAIN_np, YTRAIN_np,
    validation_data=(XVALIDATION_np, YVALIDATION_np),
    epochs=40,  #EarlyStopping will terminate training early
    batch_size=32,
    verbose=1,
    callbacks=[early_stop]
)

# Plot training and validation accuracy
plt.figure(figsize=(8, 6))
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.title('Model Training with EarlyStopping')
plt.show()
Model: "sequential_42"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type)                         ┃ Output Shape                ┃         Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ dense_112 (Dense)                    │ (None, 1)                   │              10 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
 Total params: 10 (40.00 B)
 Trainable params: 10 (40.00 B)
 Non-trainable params: 0 (0.00 B)
None
Epoch 1/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.6558 - loss: 0.6354 - val_accuracy: 0.9617 - val_loss: 0.4826
Epoch 2/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9581 - loss: 0.4603 - val_accuracy: 0.9066 - val_loss: 0.4068
Epoch 3/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9215 - loss: 0.3927 - val_accuracy: 0.9114 - val_loss: 0.3638
Epoch 4/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9224 - loss: 0.3523 - val_accuracy: 0.9234 - val_loss: 0.3327
Epoch 5/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9335 - loss: 0.3240 - val_accuracy: 0.9341 - val_loss: 0.3085
Epoch 6/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9301 - loss: 0.3080 - val_accuracy: 0.9401 - val_loss: 0.2888
Epoch 7/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9421 - loss: 0.2862 - val_accuracy: 0.9449 - val_loss: 0.2725
Epoch 8/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9438 - loss: 0.2722 - val_accuracy: 0.9497 - val_loss: 0.2586
Epoch 9/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9416 - loss: 0.2593 - val_accuracy: 0.9521 - val_loss: 0.2466
Epoch 10/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9539 - loss: 0.2384 - val_accuracy: 0.9521 - val_loss: 0.2360
Epoch 11/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9471 - loss: 0.2315 - val_accuracy: 0.9533 - val_loss: 0.2266
Epoch 12/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9553 - loss: 0.2225 - val_accuracy: 0.9557 - val_loss: 0.2181
Epoch 13/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9548 - loss: 0.2212 - val_accuracy: 0.9569 - val_loss: 0.2104
Epoch 14/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9526 - loss: 0.2085 - val_accuracy: 0.9593 - val_loss: 0.2033
Epoch 15/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9587 - loss: 0.1962 - val_accuracy: 0.9593 - val_loss: 0.1968
Epoch 16/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9522 - loss: 0.1977 - val_accuracy: 0.9605 - val_loss: 0.1908
Epoch 17/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9550 - loss: 0.1893 - val_accuracy: 0.9605 - val_loss: 0.1852
Epoch 18/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9516 - loss: 0.1863 - val_accuracy: 0.9605 - val_loss: 0.1799
Epoch 19/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9531 - loss: 0.1776 - val_accuracy: 0.9617 - val_loss: 0.1749
Epoch 20/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9517 - loss: 0.1749 - val_accuracy: 0.9617 - val_loss: 0.1702
Epoch 21/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9582 - loss: 0.1690 - val_accuracy: 0.9617 - val_loss: 0.1658
Epoch 22/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9612 - loss: 0.1601 - val_accuracy: 0.9617 - val_loss: 0.1616
Epoch 23/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9625 - loss: 0.1543 - val_accuracy: 0.9617 - val_loss: 0.1576
Epoch 24/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9561 - loss: 0.1604 - val_accuracy: 0.9641 - val_loss: 0.1538
Epoch 25/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9596 - loss: 0.1553 - val_accuracy: 0.9641 - val_loss: 0.1502
Epoch 26/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9591 - loss: 0.1536 - val_accuracy: 0.9641 - val_loss: 0.1467
Epoch 27/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9564 - loss: 0.1507 - val_accuracy: 0.9641 - val_loss: 0.1434
Epoch 28/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9640 - loss: 0.1453 - val_accuracy: 0.9641 - val_loss: 0.1402
Epoch 29/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.9644 - loss: 0.1405 - val_accuracy: 0.9665 - val_loss: 0.1371
Epoch 30/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9705 - loss: 0.1330 - val_accuracy: 0.9701 - val_loss: 0.1342
Epoch 31/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9659 - loss: 0.1368 - val_accuracy: 0.9713 - val_loss: 0.1314
Epoch 32/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.9715 - loss: 0.1259 - val_accuracy: 0.9713 - val_loss: 0.1287
Epoch 33/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9697 - loss: 0.1291 - val_accuracy: 0.9725 - val_loss: 0.1261
Epoch 34/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - accuracy: 0.9744 - loss: 0.1245 - val_accuracy: 0.9725 - val_loss: 0.1235
Epoch 35/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9742 - loss: 0.1198 - val_accuracy: 0.9725 - val_loss: 0.1211
Epoch 36/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9797 - loss: 0.1210 - val_accuracy: 0.9725 - val_loss: 0.1187
Epoch 37/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9760 - loss: 0.1187 - val_accuracy: 0.9725 - val_loss: 0.1165
Epoch 38/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9757 - loss: 0.1196 - val_accuracy: 0.9737 - val_loss: 0.1143
Epoch 39/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9792 - loss: 0.1143 - val_accuracy: 0.9749 - val_loss: 0.1122
Epoch 40/40
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9804 - loss: 0.1140 - val_accuracy: 0.9749 - val_loss: 0.1101
Restoring model weights from the end of the best epoch: 40.
No description has been provided for this image

Phase 4: Feature importance and reduction¶

Define and Train Models for Each Feature Individually¶

In [125]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping

# Load dataset
BASE_PATH = 'https://raw.githubusercontent.com/Alakhras/Abalone-Age/main/Abalone.csv'
dataset = pd.read_csv(BASE_PATH)

# Clean the dataset and preprocess
dataset = dataset[dataset['Height'] != 0]  # Remove rows with 'Height' == 0
dataset["Rings"] = np.where(dataset["Rings"] < 9, 0, 1)  # Binary classification for Rings
dataset['Sex'] = dataset['Sex'].map({'M': 1, 'F': 0, 'I': 2}).astype(float)

# Shuffle the dataset
dataset = dataset.sample(frac=1, random_state=42).reset_index(drop=True)

# Data preparation
features = ['Sex', 'Length', 'Diameter', 'Height', 'Whole weight',
            'Shucked weight', 'Viscera weight', 'Shell weight']
target = 'Rings'

# Prepare training and validation dataset
train_data = dataset.sample(frac=0.8, random_state=42)
val_data = dataset.drop(train_data.index)

X_train = train_data[features].to_numpy() # Select only the features for training
y_train = train_data[target].to_numpy()
X_val = val_data[features].to_numpy() # Select only the features for validation
y_val = val_data[target].to_numpy()

# Normalize features only for the training set
X_min = np.min(X_train, axis=0)
X_max = np.max(X_train, axis=0)
X_TRAIN_normalized = (X_train - X_min) / (X_max - X_min)

# Normalize validation data using the same min and max as the training data
# Make sure to select only the feature columns for normalization
X_VALIDATION_normalized = (X_val - X_min) / (X_max - X_min)

# Function to create a simple model
def create_model(input_shape):
    model = keras.Sequential([
        layers.Dense(10, activation='relu', input_shape=(input_shape,)),
        layers.Dense(1, activation='sigmoid')  # For binary classification
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

# Step 2: Train models for each feature individually
accuracies = {}
model_checkpoints = []

for i, feature in enumerate(features):
    model = create_model(1)  # Model for one feature only
    checkpoint = ModelCheckpoint(f'model_{feature}.keras', monitor='val_accuracy', save_best_only=True, mode='max')
    early_stopping = EarlyStopping(monitor='val_accuracy', mode='max', patience=20, restore_best_weights=True,verbose=1)
    model.fit(X_TRAIN_normalized[:, [i]], y_train, epochs=200, batch_size=8, verbose=0,
              validation_data=(X_val[:, [i]], y_val), callbacks=[checkpoint, early_stopping])
    loss, accuracy = model.evaluate(X_val[:, [i]], y_val, verbose=0)
    accuracies[feature] = accuracy
    model_checkpoints.append(checkpoint)
# Print accuracy for each feature
print("Accuracies for individual features:")
for feature, acc in accuracies.items():
    print(f"{feature}: {acc:.4f}")
# Plotting the accuracies of individual features
plt.bar(accuracies.keys(), accuracies.values())
plt.ylim(0, 1)
plt.ylabel('Validation Accuracy')
plt.title('Validation Accuracy by Feature')
plt.show()
Epoch 22: early stopping
Restoring model weights from the end of the best epoch: 2.
Epoch 28: early stopping
Restoring model weights from the end of the best epoch: 8.
Epoch 29: early stopping
Restoring model weights from the end of the best epoch: 9.
Epoch 21: early stopping
Restoring model weights from the end of the best epoch: 1.
Epoch 39: early stopping
Restoring model weights from the end of the best epoch: 19.
Epoch 21: early stopping
Restoring model weights from the end of the best epoch: 1.
Epoch 25: early stopping
Restoring model weights from the end of the best epoch: 5.
Epoch 30: early stopping
Restoring model weights from the end of the best epoch: 10.
Accuracies for individual features:
Sex: 0.8060
Length: 0.8132
Diameter: 0.8120
Height: 0.6814
Whole weight: 0.7617
Shucked weight: 0.6814
Viscera weight: 0.8180
Shell weight: 0.8323
No description has been provided for this image

Identify and Remove Least Important Features¶

In [126]:
# Step 3: Identify least important features and remove them incrementally
sorted_features = sorted(accuracies.items(), key=lambda x: x[1])
new_accuracies = []
remaining_features = features.copy()

for feature, _ in sorted_features:
    remaining_features.remove(feature)  # Remove the least important feature
    if remaining_features:  # Check if there are remaining features
     # Create and fit a new model with the remaining features
        model = create_model(len(remaining_features))
        checkpoint = ModelCheckpoint('reduced_model.keras', monitor='val_accuracy', save_best_only=True, mode='max', verbose=0)
        early_stopping = EarlyStopping(monitor='val_accuracy', mode='max', patience=20, restore_best_weights=True,verbose=0)
        model.fit(X_TRAIN_normalized[:, [features.index(f) for f in remaining_features]], y_train,
                  epochs=200, batch_size=8, verbose=0,
                  validation_data=(X_VALIDATION_normalized[:, [features.index(f) for f in remaining_features]], y_val),
                  callbacks=[checkpoint, early_stopping])

        # Evaluate the model
        loss, accuracy = model.evaluate(X_VALIDATION_normalized[:, [features.index(f) for f in remaining_features]], y_val, verbose=0)
        new_accuracies.append((remaining_features.copy(), accuracy))
# Print the new accuracies
print("Accuracies after feature reduction:")
for feature_set, acc in new_accuracies:
        print(f"Features: {feature_set}, Accuracy: {acc:.4f}")
Accuracies after feature reduction:
Features: ['Sex', 'Length', 'Diameter', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight'], Accuracy: 0.8491
Features: ['Sex', 'Length', 'Diameter', 'Whole weight', 'Viscera weight', 'Shell weight'], Accuracy: 0.8479
Features: ['Sex', 'Length', 'Diameter', 'Viscera weight', 'Shell weight'], Accuracy: 0.8455
Features: ['Length', 'Diameter', 'Viscera weight', 'Shell weight'], Accuracy: 0.8299
Features: ['Length', 'Viscera weight', 'Shell weight'], Accuracy: 0.8251
Features: ['Viscera weight', 'Shell weight'], Accuracy: 0.8263
Features: ['Shell weight'], Accuracy: 0.8323

Plot the Results After Feature Reduction¶

In [127]:
# Step 4: Plot the results after feature reduction
x_labels = [' & '.join(feature_set) for feature_set, acc in new_accuracies]
y_values = [acc for _, acc in new_accuracies]

plt.figure(figsize=(10, 6))
plt.bar(x_labels, y_values)
plt.ylabel('Validation Accuracy')
plt.title('Validation Accuracy with Feature Reduction')
plt.xticks(rotation=45)
plt.ylim(0, 1)
plt.axhline(y=0.5, color='r', linestyle='--')  # Reference line for 0.5 accuracy
plt.show()
No description has been provided for this image

Compare the Original Model with the Reduced Feature Model¶

In [128]:
# Step 5: Compare the original model with the reduced feature model
# Original model (using all features)
final_model = create_model(len(features))
checkpoint = ModelCheckpoint('final_model.keras', monitor='val_accuracy', save_best_only=True, mode='max')
early_stopping = EarlyStopping(monitor='val_accuracy', mode='max', patience=20, restore_best_weights=True,verbose=1)
final_model.fit(X_TRAIN_normalized, y_train, epochs=200, batch_size=8, verbose=0,
                validation_data=(X_VALIDATION_normalized, y_val), callbacks=[checkpoint, early_stopping])

# Evaluate the original model
loss, original_accuracy = final_model.evaluate(X_VALIDATION_normalized, y_val, verbose=0)

# Print the accuracies
print(f'Accuracy with all features: {original_accuracy:.4f}')
Epoch 39: early stopping
Restoring model weights from the end of the best epoch: 19.
Accuracy with all features: 0.8491

Accuracy of the Best Model After Feature Reduction¶

In [129]:
# Accuracy of the best model after feature reduction
if new_accuracies:
    reduced_accuracy = new_accuracies[-1][1]
    print(f'Best accuracy after feature reduction: {reduced_accuracy:.4f}')
else:
    print('No features remain after removal.')
Best accuracy after feature reduction: 0.8323

Show Removed Features¶

In [130]:
# Optional: Show the features that were removed and the final accuracy of the reduced model
removed_features = [feature for feature in sorted_features]
print('Removed Features:', ', '.join([feature[0] for feature in removed_features]))
Removed Features: Height, Shucked weight, Whole weight, Sex, Diameter, Length, Viscera weight, Shell weight

Use model-agnostic methods such as LIME or Shapley values to derive feature importance.¶

In [ ]:
import pandas as pd  # Import pandas if not already imported
import shap

# SHAP explainer initialization for feature importance
X_TRAIN_normalized_df = pd.DataFrame(X_TRAIN_normalized, columns=X.columns[:8]) # Assuming X is your original DataFrame
# SHAP explainer initialization
explainer = shap.KernelExplainer(best_model.predict, X_TRAIN_normalized_df.sample(100).values)  # Use the DataFrame for sampling

# Get a consistent set of 100 samples from XVALIDATION for SHAP
num_samples = min(100, X_VALIDATION_normalized.shape[0]) # Handle cases with fewer than 100 rows
X_validation_sample = X_VALIDATION_normalized[np.random.choice(X_VALIDATION_normalized.shape[0], num_samples, replace=False), :8]

# Calculate SHAP values
shap_values = explainer.shap_values(X_validation_sample)

# If shap_values is a list (for multi-output models), take the first element
if isinstance(shap_values, list):
    shap_values = shap_values[0]

# Ensure shap_values is a 2D array and then select the first output's values
if len(shap_values.shape) == 3:  # Check if it's 3D
    shap_values = shap_values[:, :, 0]  # Select the first output's values
elif len(shap_values.shape) == 1:
    shap_values = shap_values.reshape(-1, 1)

# Verify shapes for consistency
print("Shape of SHAP values:", shap_values.shape)  # Expect (n_samples, n_features)
print("Shape of X_validation_sample:", X_validation_sample.shape)  # Expect (n_samples, n_features)
shap.summary_plot(shap_values, X_validation_sample, feature_names=X.columns.tolist())  # Remove [:, :, 0]
shap.initjs()  # Initialize JavaScript for visualizations

instance_to_explain = X_VALIDATION_normalized[0]  # Remove reshape, it's already 1D
expected_value = explainer.expected_value
if isinstance(expected_value, (np.ndarray, list)):  # Handle both ndarray and list
    expected_value = expected_value[0]  # Take the first element if it's a list or array

# Create the force plot using the reshaped shap_values for a single output
shap.force_plot(expected_value, shap_values[0], instance_to_explain, feature_names=X.columns.tolist())

Step 15: Strategies for Improving Performance¶

  1. Technical Enhancements:

    • Increase Epochs: One approach to enhance model performance is to increase the number of training epochs to 100 or 150. This will provide the model with additional time to learn from the data.
    • Add Additional Layers: Enhancing the complexity of the neural network by adding more layers can increase its learning capacity, allowing it to capture more intricate patterns in the data.
  2. Balance the Dataset:

    • It is crucial to ensure that the dataset contains an equal number of samples for each class. You can achieve this through:
      • Oversampling: Utilize techniques like SMOTE (Synthetic Minority Over-sampling Technique) to generate additional samples for the minority class.
      • Undersampling: Reduce the number of samples in the majority class to maintain balance within the dataset.
  3. Adjust Training and Validation Set Size:

    • Modify the number of records in your training and validation sets based on your needs:
      • Increase Rows: If your dataset is limited, consider gathering more data to augment it.
      • Decrease Rows: If the majority class has a disproportionately large number of records, reducing its size may help in achieving better balance and model performance.
  4. Geographical Location: Including the locations where abalones are collected can help us understand how different environments affect their characteristics.

  5. Abalone Species: Adding information about the species will allow us to account for biological differences that may impact classification.

  6. Color: The color of abalones can vary, and using this feature can help the model better distinguish between different types.

  7. Number of Predators: Knowing how many natural predators are in the area could provide insights into the abalones' health and behaviors, potentially influencing classification.

  8. Living Environment: Features such as the type of habitat (e.g., rocky, sandy) and environmental conditions (like pollution and temperature) are crucial for understanding their characteristics.

By implementing these strategies, we can enhance the effectiveness of our classification model.

In [122]:
%%shell
jupyter nbconvert --to html AI_Final_Prt.ipynb
[NbConvertApp] Converting notebook AI_Final_Prt.ipynb to html
[NbConvertApp] WARNING | Alternative text is missing on 13 image(s).
[NbConvertApp] Writing 5858895 bytes to AI_Final_Prt.html
Out[122]: